00:00:00.000 Started by upstream project "autotest-per-patch" build number 122886 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.084 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.085 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.126 Fetching changes from the remote Git repository 00:00:00.128 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.163 Using shallow fetch with depth 1 00:00:00.163 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.163 > git --version # timeout=10 00:00:00.192 > git --version # 'git version 2.39.2' 00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.192 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.192 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.387 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.397 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.409 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:05.409 > git config core.sparsecheckout # timeout=10 00:00:05.419 > git read-tree -mu HEAD # timeout=10 00:00:05.434 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:05.454 Commit message: "inventory/dev: add missing long names" 00:00:05.454 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:05.571 [Pipeline] Start of Pipeline 00:00:05.585 [Pipeline] library 00:00:05.586 Loading library shm_lib@master 00:00:05.586 Library shm_lib@master is cached. Copying from home. 00:00:05.604 [Pipeline] node 00:00:20.606 Still waiting to schedule task 00:00:20.606 Waiting for next available executor on ‘vagrant-vm-host’ 00:06:38.178 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:06:38.180 [Pipeline] { 00:06:38.195 [Pipeline] catchError 00:06:38.197 [Pipeline] { 00:06:38.216 [Pipeline] wrap 00:06:38.227 [Pipeline] { 00:06:38.236 [Pipeline] stage 00:06:38.237 [Pipeline] { (Prologue) 00:06:38.256 [Pipeline] echo 00:06:38.257 Node: VM-host-SM4 00:06:38.261 [Pipeline] cleanWs 00:06:38.268 [WS-CLEANUP] Deleting project workspace... 00:06:38.268 [WS-CLEANUP] Deferred wipeout is used... 
00:06:38.276 [WS-CLEANUP] done 00:06:38.441 [Pipeline] setCustomBuildProperty 00:06:38.513 [Pipeline] nodesByLabel 00:06:38.515 Found a total of 1 nodes with the 'sorcerer' label 00:06:38.526 [Pipeline] httpRequest 00:06:38.530 HttpMethod: GET 00:06:38.531 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:06:38.532 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:06:38.534 Response Code: HTTP/1.1 200 OK 00:06:38.534 Success: Status code 200 is in the accepted range: 200,404 00:06:38.536 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:06:38.674 [Pipeline] sh 00:06:38.952 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:06:38.974 [Pipeline] httpRequest 00:06:38.979 HttpMethod: GET 00:06:38.979 URL: http://10.211.164.101/packages/spdk_56756573693b99ea74c7bdacee713bdbf151966c.tar.gz 00:06:38.980 Sending request to url: http://10.211.164.101/packages/spdk_56756573693b99ea74c7bdacee713bdbf151966c.tar.gz 00:06:38.981 Response Code: HTTP/1.1 200 OK 00:06:38.981 Success: Status code 200 is in the accepted range: 200,404 00:06:38.981 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_56756573693b99ea74c7bdacee713bdbf151966c.tar.gz 00:06:41.131 [Pipeline] sh 00:06:41.406 + tar --no-same-owner -xf spdk_56756573693b99ea74c7bdacee713bdbf151966c.tar.gz 00:06:44.694 [Pipeline] sh 00:06:44.972 + git -C spdk log --oneline -n5 00:06:44.972 567565736 blob: add blob set external parent 00:06:44.972 0e4f7fc9b blob: add blob set parent 00:06:44.972 4506c0c36 test/common: Enable inherit_errexit 00:06:44.972 b24df7cfa test: Drop superfluous calls to print_backtrace() 00:06:44.972 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:06:44.993 [Pipeline] writeFile 00:06:45.008 [Pipeline] sh 00:06:45.286 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:06:45.296 [Pipeline] sh 00:06:45.628 + cat autorun-spdk.conf 00:06:45.628 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:45.628 SPDK_TEST_NVMF=1 00:06:45.628 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:45.628 SPDK_TEST_USDT=1 00:06:45.628 SPDK_TEST_NVMF_MDNS=1 00:06:45.628 SPDK_RUN_UBSAN=1 00:06:45.628 NET_TYPE=virt 00:06:45.628 SPDK_JSONRPC_GO_CLIENT=1 00:06:45.628 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:45.634 RUN_NIGHTLY=0 00:06:45.638 [Pipeline] } 00:06:45.654 [Pipeline] // stage 00:06:45.672 [Pipeline] stage 00:06:45.675 [Pipeline] { (Run VM) 00:06:45.692 [Pipeline] sh 00:06:45.969 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:06:45.969 + echo 'Start stage prepare_nvme.sh' 00:06:45.969 Start stage prepare_nvme.sh 00:06:45.969 + [[ -n 3 ]] 00:06:45.969 + disk_prefix=ex3 00:06:45.969 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:06:45.969 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:06:45.969 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:06:45.969 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:45.969 ++ SPDK_TEST_NVMF=1 00:06:45.969 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:45.969 ++ SPDK_TEST_USDT=1 00:06:45.969 ++ SPDK_TEST_NVMF_MDNS=1 00:06:45.969 ++ SPDK_RUN_UBSAN=1 00:06:45.969 ++ NET_TYPE=virt 00:06:45.969 ++ SPDK_JSONRPC_GO_CLIENT=1 00:06:45.969 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:45.969 ++ RUN_NIGHTLY=0 00:06:45.969 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:06:45.969 + nvme_files=() 00:06:45.969 + 
declare -A nvme_files 00:06:45.969 + backend_dir=/var/lib/libvirt/images/backends 00:06:45.969 + nvme_files['nvme.img']=5G 00:06:45.969 + nvme_files['nvme-cmb.img']=5G 00:06:45.969 + nvme_files['nvme-multi0.img']=4G 00:06:45.969 + nvme_files['nvme-multi1.img']=4G 00:06:45.969 + nvme_files['nvme-multi2.img']=4G 00:06:45.969 + nvme_files['nvme-openstack.img']=8G 00:06:45.969 + nvme_files['nvme-zns.img']=5G 00:06:45.969 + (( SPDK_TEST_NVME_PMR == 1 )) 00:06:45.969 + (( SPDK_TEST_FTL == 1 )) 00:06:45.969 + (( SPDK_TEST_NVME_FDP == 1 )) 00:06:45.969 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:06:45.969 + for nvme in "${!nvme_files[@]}" 00:06:45.969 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:06:45.969 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:06:45.969 + for nvme in "${!nvme_files[@]}" 00:06:45.969 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:06:45.969 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:06:45.969 + for nvme in "${!nvme_files[@]}" 00:06:45.969 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:06:45.969 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:06:45.969 + for nvme in "${!nvme_files[@]}" 00:06:45.969 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:06:45.969 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:06:45.969 + for nvme in "${!nvme_files[@]}" 00:06:45.969 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:06:45.969 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:06:45.969 + for nvme in "${!nvme_files[@]}" 00:06:45.969 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:06:45.969 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:06:45.969 + for nvme in "${!nvme_files[@]}" 00:06:45.969 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:06:46.226 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:06:46.226 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:06:46.226 + echo 'End stage prepare_nvme.sh' 00:06:46.226 End stage prepare_nvme.sh 00:06:46.237 [Pipeline] sh 00:06:46.512 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:06:46.512 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:06:46.512 00:06:46.512 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:06:46.512 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:06:46.512 
VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:06:46.512 HELP=0 00:06:46.512 DRY_RUN=0 00:06:46.512 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:06:46.512 NVME_DISKS_TYPE=nvme,nvme, 00:06:46.512 NVME_AUTO_CREATE=0 00:06:46.512 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:06:46.512 NVME_CMB=,, 00:06:46.512 NVME_PMR=,, 00:06:46.512 NVME_ZNS=,, 00:06:46.512 NVME_MS=,, 00:06:46.512 NVME_FDP=,, 00:06:46.512 SPDK_VAGRANT_DISTRO=fedora38 00:06:46.512 SPDK_VAGRANT_VMCPU=10 00:06:46.512 SPDK_VAGRANT_VMRAM=12288 00:06:46.512 SPDK_VAGRANT_PROVIDER=libvirt 00:06:46.512 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:06:46.512 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:06:46.512 SPDK_OPENSTACK_NETWORK=0 00:06:46.512 VAGRANT_PACKAGE_BOX=0 00:06:46.512 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:06:46.512 FORCE_DISTRO=true 00:06:46.512 VAGRANT_BOX_VERSION= 00:06:46.512 EXTRA_VAGRANTFILES= 00:06:46.512 NIC_MODEL=e1000 00:06:46.512 00:06:46.512 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:06:46.512 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:06:50.692 Bringing machine 'default' up with 'libvirt' provider... 00:06:51.643 ==> default: Creating image (snapshot of base box volume). 00:06:51.643 ==> default: Creating domain with the following settings... 00:06:51.643 ==> default: -- Name: fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1715766448_74635ce897e50640438a 00:06:51.643 ==> default: -- Domain type: kvm 00:06:51.643 ==> default: -- Cpus: 10 00:06:51.643 ==> default: -- Feature: acpi 00:06:51.643 ==> default: -- Feature: apic 00:06:51.643 ==> default: -- Feature: pae 00:06:51.643 ==> default: -- Memory: 12288M 00:06:51.643 ==> default: -- Memory Backing: hugepages: 00:06:51.643 ==> default: -- Management MAC: 00:06:51.643 ==> default: -- Loader: 00:06:51.643 ==> default: -- Nvram: 00:06:51.643 ==> default: -- Base box: spdk/fedora38 00:06:51.643 ==> default: -- Storage pool: default 00:06:51.643 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1715766448_74635ce897e50640438a.img (20G) 00:06:51.643 ==> default: -- Volume Cache: default 00:06:51.643 ==> default: -- Kernel: 00:06:51.643 ==> default: -- Initrd: 00:06:51.643 ==> default: -- Graphics Type: vnc 00:06:51.643 ==> default: -- Graphics Port: -1 00:06:51.643 ==> default: -- Graphics IP: 127.0.0.1 00:06:51.643 ==> default: -- Graphics Password: Not defined 00:06:51.643 ==> default: -- Video Type: cirrus 00:06:51.643 ==> default: -- Video VRAM: 9216 00:06:51.643 ==> default: -- Sound Type: 00:06:51.643 ==> default: -- Keymap: en-us 00:06:51.643 ==> default: -- TPM Path: 00:06:51.643 ==> default: -- INPUT: type=mouse, bus=ps2 00:06:51.643 ==> default: -- Command line args: 00:06:51.643 ==> default: -> value=-device, 00:06:51.643 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:06:51.643 ==> default: -> value=-drive, 00:06:51.643 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:06:51.643 ==> default: -> value=-device, 00:06:51.643 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:51.643 ==> default: -> value=-device, 00:06:51.643 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:06:51.643 ==> default: -> value=-drive, 00:06:51.643 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:06:51.643 ==> default: -> value=-device, 00:06:51.643 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:51.643 ==> default: -> value=-drive, 00:06:51.643 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:06:51.643 ==> default: -> value=-device, 00:06:51.643 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:51.643 ==> default: -> value=-drive, 00:06:51.643 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:06:51.643 ==> default: -> value=-device, 00:06:51.643 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:06:51.902 ==> default: Creating shared folders metadata... 00:06:51.902 ==> default: Starting domain. 00:06:53.801 ==> default: Waiting for domain to get an IP address... 00:07:11.893 ==> default: Waiting for SSH to become available... 00:07:11.893 ==> default: Configuring and enabling network interfaces... 00:07:16.095 default: SSH address: 192.168.121.36:22 00:07:16.095 default: SSH username: vagrant 00:07:16.095 default: SSH auth method: private key 00:07:18.621 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:07:28.610 ==> default: Mounting SSHFS shared folder... 00:07:29.176 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:07:29.176 ==> default: Checking Mount.. 00:07:30.549 ==> default: Folder Successfully Mounted! 00:07:30.549 ==> default: Running provisioner: file... 00:07:31.484 default: ~/.gitconfig => .gitconfig 00:07:32.049 00:07:32.049 SUCCESS! 00:07:32.049 00:07:32.049 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:07:32.049 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:07:32.049 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 
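For reference, the -device/-drive pairs listed above are the NVMe arguments this job hands to QEMU: controller nvme-0 (serial 12340) gets a single namespace backed by ex3-nvme.img, and controller nvme-1 (serial 12341) gets three namespaces backed by ex3-nvme-multi0/1/2.img. A minimal sketch of a roughly equivalent standalone invocation, assembled only from the arguments and paths shown in this log (machine, memory, and boot-disk options are omitted, so this illustrates the NVMe topology rather than a bootable command):

    # nvme-0 (serial 12340): one namespace on ex3-nvme.img
    # nvme-1 (serial 12341): namespaces 1-3 on ex3-nvme-multi0/1/2.img
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

This matches the device layout reported later by setup.sh status inside the guest: nvme0 with one namespace (nvme0n1) and nvme1 with three (nvme1n1, nvme1n2, nvme1n3).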
00:07:32.049 00:07:32.058 [Pipeline] } 00:07:32.076 [Pipeline] // stage 00:07:32.085 [Pipeline] dir 00:07:32.085 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:07:32.087 [Pipeline] { 00:07:32.100 [Pipeline] catchError 00:07:32.102 [Pipeline] { 00:07:32.115 [Pipeline] sh 00:07:32.392 + vagrant ssh-config --host vagrant 00:07:32.392 + sed -ne /^Host/,$p 00:07:32.392 + tee ssh_conf 00:07:36.574 Host vagrant 00:07:36.574 HostName 192.168.121.36 00:07:36.574 User vagrant 00:07:36.574 Port 22 00:07:36.574 UserKnownHostsFile /dev/null 00:07:36.574 StrictHostKeyChecking no 00:07:36.574 PasswordAuthentication no 00:07:36.574 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1701806725-069-updated-1701632595-patched-kernel/libvirt/fedora38 00:07:36.574 IdentitiesOnly yes 00:07:36.574 LogLevel FATAL 00:07:36.574 ForwardAgent yes 00:07:36.574 ForwardX11 yes 00:07:36.575 00:07:36.588 [Pipeline] withEnv 00:07:36.590 [Pipeline] { 00:07:36.607 [Pipeline] sh 00:07:36.884 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:07:36.884 source /etc/os-release 00:07:36.884 [[ -e /image.version ]] && img=$(< /image.version) 00:07:36.884 # Minimal, systemd-like check. 00:07:36.884 if [[ -e /.dockerenv ]]; then 00:07:36.884 # Clear garbage from the node's name: 00:07:36.884 # agt-er_autotest_547-896 -> autotest_547-896 00:07:36.884 # $HOSTNAME is the actual container id 00:07:36.884 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:07:36.884 if mountpoint -q /etc/hostname; then 00:07:36.884 # We can assume this is a mount from a host where container is running, 00:07:36.884 # so fetch its hostname to easily identify the target swarm worker. 00:07:36.884 container="$(< /etc/hostname) ($agent)" 00:07:36.884 else 00:07:36.884 # Fallback 00:07:36.884 container=$agent 00:07:36.884 fi 00:07:36.884 fi 00:07:36.884 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:07:36.884 00:07:37.152 [Pipeline] } 00:07:37.171 [Pipeline] // withEnv 00:07:37.179 [Pipeline] setCustomBuildProperty 00:07:37.192 [Pipeline] stage 00:07:37.194 [Pipeline] { (Tests) 00:07:37.212 [Pipeline] sh 00:07:37.488 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:07:37.758 [Pipeline] timeout 00:07:37.759 Timeout set to expire in 40 min 00:07:37.760 [Pipeline] { 00:07:37.776 [Pipeline] sh 00:07:38.052 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:07:38.985 HEAD is now at 567565736 blob: add blob set external parent 00:07:38.997 [Pipeline] sh 00:07:39.275 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:07:39.547 [Pipeline] sh 00:07:39.829 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:07:40.100 [Pipeline] sh 00:07:40.402 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:07:40.403 ++ readlink -f spdk_repo 00:07:40.660 + DIR_ROOT=/home/vagrant/spdk_repo 00:07:40.660 + [[ -n /home/vagrant/spdk_repo ]] 00:07:40.660 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:07:40.660 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:07:40.660 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:07:40.660 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:07:40.660 + [[ -d /home/vagrant/spdk_repo/output ]] 00:07:40.660 + cd /home/vagrant/spdk_repo 00:07:40.660 + source /etc/os-release 00:07:40.660 ++ NAME='Fedora Linux' 00:07:40.660 ++ VERSION='38 (Cloud Edition)' 00:07:40.660 ++ ID=fedora 00:07:40.660 ++ VERSION_ID=38 00:07:40.660 ++ VERSION_CODENAME= 00:07:40.660 ++ PLATFORM_ID=platform:f38 00:07:40.660 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:07:40.660 ++ ANSI_COLOR='0;38;2;60;110;180' 00:07:40.660 ++ LOGO=fedora-logo-icon 00:07:40.660 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:07:40.660 ++ HOME_URL=https://fedoraproject.org/ 00:07:40.660 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:07:40.660 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:07:40.660 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:07:40.660 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:07:40.660 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:07:40.660 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:07:40.660 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:07:40.660 ++ SUPPORT_END=2024-05-14 00:07:40.660 ++ VARIANT='Cloud Edition' 00:07:40.660 ++ VARIANT_ID=cloud 00:07:40.660 + uname -a 00:07:40.660 Linux fedora38-cloud-1701806725-069-updated-1701632595 6.5.12-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Dec 3 20:08:38 UTC 2023 x86_64 GNU/Linux 00:07:40.660 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:41.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:41.225 Hugepages 00:07:41.225 node hugesize free / total 00:07:41.225 node0 1048576kB 0 / 0 00:07:41.225 node0 2048kB 0 / 0 00:07:41.225 00:07:41.225 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:41.225 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:41.225 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:41.225 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:41.225 + rm -f /tmp/spdk-ld-path 00:07:41.225 + source autorun-spdk.conf 00:07:41.225 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:41.225 ++ SPDK_TEST_NVMF=1 00:07:41.225 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:41.225 ++ SPDK_TEST_USDT=1 00:07:41.225 ++ SPDK_TEST_NVMF_MDNS=1 00:07:41.225 ++ SPDK_RUN_UBSAN=1 00:07:41.225 ++ NET_TYPE=virt 00:07:41.225 ++ SPDK_JSONRPC_GO_CLIENT=1 00:07:41.225 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:41.225 ++ RUN_NIGHTLY=0 00:07:41.225 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:07:41.225 + [[ -n '' ]] 00:07:41.225 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:07:41.225 + for M in /var/spdk/build-*-manifest.txt 00:07:41.225 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:07:41.225 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:41.225 + for M in /var/spdk/build-*-manifest.txt 00:07:41.225 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:07:41.225 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:41.225 + for M in /var/spdk/build-*-manifest.txt 00:07:41.225 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:07:41.225 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:41.225 ++ uname 00:07:41.225 + [[ Linux == \L\i\n\u\x ]] 00:07:41.225 + sudo dmesg -T 00:07:41.225 + sudo dmesg --clear 00:07:41.483 + dmesg_pid=5030 00:07:41.483 + [[ Fedora Linux == FreeBSD ]] 00:07:41.483 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:41.483 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:41.483 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:41.483 + [[ -x /usr/src/fio-static/fio ]] 00:07:41.483 + sudo dmesg -Tw 00:07:41.483 + export FIO_BIN=/usr/src/fio-static/fio 00:07:41.483 + FIO_BIN=/usr/src/fio-static/fio 00:07:41.483 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:07:41.483 + [[ ! -v VFIO_QEMU_BIN ]] 00:07:41.483 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:07:41.483 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:41.483 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:41.483 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:07:41.483 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:41.483 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:41.483 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:41.483 Test configuration: 00:07:41.483 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:41.483 SPDK_TEST_NVMF=1 00:07:41.483 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:41.483 SPDK_TEST_USDT=1 00:07:41.483 SPDK_TEST_NVMF_MDNS=1 00:07:41.483 SPDK_RUN_UBSAN=1 00:07:41.483 NET_TYPE=virt 00:07:41.483 SPDK_JSONRPC_GO_CLIENT=1 00:07:41.483 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:41.483 RUN_NIGHTLY=0 09:48:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:41.483 09:48:18 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:41.483 09:48:18 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.483 09:48:18 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.483 09:48:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.483 09:48:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.483 09:48:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.483 09:48:18 -- paths/export.sh@5 -- $ export PATH 00:07:41.483 09:48:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.483 09:48:18 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:07:41.483 09:48:18 -- common/autobuild_common.sh@437 -- $ date +%s 00:07:41.483 09:48:18 -- 
common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715766498.XXXXXX 00:07:41.483 09:48:18 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715766498.HPlgxV 00:07:41.483 09:48:18 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:07:41.483 09:48:18 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:07:41.483 09:48:18 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:07:41.483 09:48:18 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:07:41.483 09:48:18 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:07:41.483 09:48:18 -- common/autobuild_common.sh@453 -- $ get_config_params 00:07:41.483 09:48:18 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:07:41.483 09:48:18 -- common/autotest_common.sh@10 -- $ set +x 00:07:41.483 09:48:18 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:07:41.483 09:48:18 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:07:41.483 09:48:18 -- pm/common@17 -- $ local monitor 00:07:41.483 09:48:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:41.483 09:48:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:41.483 09:48:18 -- pm/common@25 -- $ sleep 1 00:07:41.483 09:48:18 -- pm/common@21 -- $ date +%s 00:07:41.483 09:48:18 -- pm/common@21 -- $ date +%s 00:07:41.483 09:48:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715766498 00:07:41.483 09:48:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715766498 00:07:41.483 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715766498_collect-vmstat.pm.log 00:07:41.483 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715766498_collect-cpu-load.pm.log 00:07:42.414 09:48:19 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:07:42.414 09:48:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:42.414 09:48:19 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:42.414 09:48:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:07:42.414 09:48:19 -- spdk/autobuild.sh@16 -- $ date -u 00:07:42.414 Wed May 15 09:48:19 AM UTC 2024 00:07:42.414 09:48:19 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:07:42.671 v24.05-pre-660-g567565736 00:07:42.671 09:48:19 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:07:42.671 09:48:19 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:07:42.671 09:48:19 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:07:42.671 09:48:19 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:07:42.671 09:48:19 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:07:42.671 09:48:19 -- common/autotest_common.sh@10 -- $ set +x 00:07:42.671 ************************************ 00:07:42.671 START TEST ubsan 00:07:42.671 ************************************ 
00:07:42.671 using ubsan 00:07:42.671 09:48:19 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:07:42.671 00:07:42.671 real 0m0.000s 00:07:42.671 user 0m0.000s 00:07:42.671 sys 0m0.000s 00:07:42.671 09:48:19 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:07:42.671 09:48:19 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:07:42.671 ************************************ 00:07:42.671 END TEST ubsan 00:07:42.671 ************************************ 00:07:42.671 09:48:19 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:07:42.671 09:48:19 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:07:42.671 09:48:19 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:07:42.671 09:48:19 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:07:42.671 09:48:19 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:07:42.671 09:48:19 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:07:42.671 09:48:19 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:07:42.671 09:48:19 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:07:42.671 09:48:19 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:07:42.671 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:42.671 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:43.237 Using 'verbs' RDMA provider 00:07:59.097 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:08:13.966 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:08:13.966 go version go1.21.1 linux/amd64 00:08:13.966 Creating mk/config.mk...done. 00:08:13.966 Creating mk/cc.flags.mk...done. 00:08:13.966 Type 'make' to build. 00:08:13.966 09:48:49 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:08:13.966 09:48:49 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:08:13.966 09:48:49 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:08:13.966 09:48:49 -- common/autotest_common.sh@10 -- $ set +x 00:08:13.966 ************************************ 00:08:13.966 START TEST make 00:08:13.966 ************************************ 00:08:13.966 09:48:49 make -- common/autotest_common.sh@1122 -- $ make -j10 00:08:13.966 make[1]: Nothing to be done for 'all'. 
00:08:32.077 The Meson build system 00:08:32.077 Version: 1.3.0 00:08:32.077 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:08:32.077 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:08:32.077 Build type: native build 00:08:32.077 Program cat found: YES (/usr/bin/cat) 00:08:32.077 Project name: DPDK 00:08:32.077 Project version: 23.11.0 00:08:32.077 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:08:32.077 C linker for the host machine: cc ld.bfd 2.39-16 00:08:32.077 Host machine cpu family: x86_64 00:08:32.077 Host machine cpu: x86_64 00:08:32.077 Message: ## Building in Developer Mode ## 00:08:32.077 Program pkg-config found: YES (/usr/bin/pkg-config) 00:08:32.077 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:08:32.077 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:08:32.077 Program python3 found: YES (/usr/bin/python3) 00:08:32.077 Program cat found: YES (/usr/bin/cat) 00:08:32.077 Compiler for C supports arguments -march=native: YES 00:08:32.077 Checking for size of "void *" : 8 00:08:32.077 Checking for size of "void *" : 8 (cached) 00:08:32.077 Library m found: YES 00:08:32.077 Library numa found: YES 00:08:32.077 Has header "numaif.h" : YES 00:08:32.077 Library fdt found: NO 00:08:32.077 Library execinfo found: NO 00:08:32.077 Has header "execinfo.h" : YES 00:08:32.077 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:08:32.077 Run-time dependency libarchive found: NO (tried pkgconfig) 00:08:32.077 Run-time dependency libbsd found: NO (tried pkgconfig) 00:08:32.077 Run-time dependency jansson found: NO (tried pkgconfig) 00:08:32.077 Run-time dependency openssl found: YES 3.0.9 00:08:32.077 Run-time dependency libpcap found: YES 1.10.4 00:08:32.077 Has header "pcap.h" with dependency libpcap: YES 00:08:32.077 Compiler for C supports arguments -Wcast-qual: YES 00:08:32.077 Compiler for C supports arguments -Wdeprecated: YES 00:08:32.077 Compiler for C supports arguments -Wformat: YES 00:08:32.077 Compiler for C supports arguments -Wformat-nonliteral: NO 00:08:32.077 Compiler for C supports arguments -Wformat-security: NO 00:08:32.077 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:32.077 Compiler for C supports arguments -Wmissing-prototypes: YES 00:08:32.077 Compiler for C supports arguments -Wnested-externs: YES 00:08:32.077 Compiler for C supports arguments -Wold-style-definition: YES 00:08:32.077 Compiler for C supports arguments -Wpointer-arith: YES 00:08:32.077 Compiler for C supports arguments -Wsign-compare: YES 00:08:32.077 Compiler for C supports arguments -Wstrict-prototypes: YES 00:08:32.077 Compiler for C supports arguments -Wundef: YES 00:08:32.077 Compiler for C supports arguments -Wwrite-strings: YES 00:08:32.077 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:08:32.077 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:08:32.077 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:08:32.077 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:08:32.077 Program objdump found: YES (/usr/bin/objdump) 00:08:32.077 Compiler for C supports arguments -mavx512f: YES 00:08:32.077 Checking if "AVX512 checking" compiles: YES 00:08:32.077 Fetching value of define "__SSE4_2__" : 1 00:08:32.077 Fetching value of define "__AES__" : 1 00:08:32.077 Fetching value of define "__AVX__" : 1 00:08:32.077 
Fetching value of define "__AVX2__" : 1 00:08:32.077 Fetching value of define "__AVX512BW__" : 1 00:08:32.077 Fetching value of define "__AVX512CD__" : 1 00:08:32.077 Fetching value of define "__AVX512DQ__" : 1 00:08:32.077 Fetching value of define "__AVX512F__" : 1 00:08:32.077 Fetching value of define "__AVX512VL__" : 1 00:08:32.077 Fetching value of define "__PCLMUL__" : 1 00:08:32.077 Fetching value of define "__RDRND__" : 1 00:08:32.077 Fetching value of define "__RDSEED__" : 1 00:08:32.077 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:08:32.077 Fetching value of define "__znver1__" : (undefined) 00:08:32.077 Fetching value of define "__znver2__" : (undefined) 00:08:32.077 Fetching value of define "__znver3__" : (undefined) 00:08:32.077 Fetching value of define "__znver4__" : (undefined) 00:08:32.077 Compiler for C supports arguments -Wno-format-truncation: YES 00:08:32.077 Message: lib/log: Defining dependency "log" 00:08:32.077 Message: lib/kvargs: Defining dependency "kvargs" 00:08:32.077 Message: lib/telemetry: Defining dependency "telemetry" 00:08:32.077 Checking for function "getentropy" : NO 00:08:32.077 Message: lib/eal: Defining dependency "eal" 00:08:32.077 Message: lib/ring: Defining dependency "ring" 00:08:32.077 Message: lib/rcu: Defining dependency "rcu" 00:08:32.077 Message: lib/mempool: Defining dependency "mempool" 00:08:32.077 Message: lib/mbuf: Defining dependency "mbuf" 00:08:32.077 Fetching value of define "__PCLMUL__" : 1 (cached) 00:08:32.077 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:32.077 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:32.077 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:08:32.077 Fetching value of define "__AVX512VL__" : 1 (cached) 00:08:32.077 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:08:32.077 Compiler for C supports arguments -mpclmul: YES 00:08:32.077 Compiler for C supports arguments -maes: YES 00:08:32.077 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:32.077 Compiler for C supports arguments -mavx512bw: YES 00:08:32.077 Compiler for C supports arguments -mavx512dq: YES 00:08:32.077 Compiler for C supports arguments -mavx512vl: YES 00:08:32.077 Compiler for C supports arguments -mvpclmulqdq: YES 00:08:32.077 Compiler for C supports arguments -mavx2: YES 00:08:32.077 Compiler for C supports arguments -mavx: YES 00:08:32.077 Message: lib/net: Defining dependency "net" 00:08:32.077 Message: lib/meter: Defining dependency "meter" 00:08:32.077 Message: lib/ethdev: Defining dependency "ethdev" 00:08:32.077 Message: lib/pci: Defining dependency "pci" 00:08:32.078 Message: lib/cmdline: Defining dependency "cmdline" 00:08:32.078 Message: lib/hash: Defining dependency "hash" 00:08:32.078 Message: lib/timer: Defining dependency "timer" 00:08:32.078 Message: lib/compressdev: Defining dependency "compressdev" 00:08:32.078 Message: lib/cryptodev: Defining dependency "cryptodev" 00:08:32.078 Message: lib/dmadev: Defining dependency "dmadev" 00:08:32.078 Compiler for C supports arguments -Wno-cast-qual: YES 00:08:32.078 Message: lib/power: Defining dependency "power" 00:08:32.078 Message: lib/reorder: Defining dependency "reorder" 00:08:32.078 Message: lib/security: Defining dependency "security" 00:08:32.078 Has header "linux/userfaultfd.h" : YES 00:08:32.078 Has header "linux/vduse.h" : YES 00:08:32.078 Message: lib/vhost: Defining dependency "vhost" 00:08:32.078 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:08:32.078 Message: 
drivers/bus/pci: Defining dependency "bus_pci" 00:08:32.078 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:08:32.078 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:08:32.078 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:08:32.078 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:08:32.078 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:08:32.078 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:08:32.078 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:08:32.078 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:08:32.078 Program doxygen found: YES (/usr/bin/doxygen) 00:08:32.078 Configuring doxy-api-html.conf using configuration 00:08:32.078 Configuring doxy-api-man.conf using configuration 00:08:32.078 Program mandb found: YES (/usr/bin/mandb) 00:08:32.078 Program sphinx-build found: NO 00:08:32.078 Configuring rte_build_config.h using configuration 00:08:32.078 Message: 00:08:32.078 ================= 00:08:32.078 Applications Enabled 00:08:32.078 ================= 00:08:32.078 00:08:32.078 apps: 00:08:32.078 00:08:32.078 00:08:32.078 Message: 00:08:32.078 ================= 00:08:32.078 Libraries Enabled 00:08:32.078 ================= 00:08:32.078 00:08:32.078 libs: 00:08:32.078 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:08:32.078 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:08:32.078 cryptodev, dmadev, power, reorder, security, vhost, 00:08:32.078 00:08:32.078 Message: 00:08:32.078 =============== 00:08:32.078 Drivers Enabled 00:08:32.078 =============== 00:08:32.078 00:08:32.078 common: 00:08:32.078 00:08:32.078 bus: 00:08:32.078 pci, vdev, 00:08:32.078 mempool: 00:08:32.078 ring, 00:08:32.078 dma: 00:08:32.078 00:08:32.078 net: 00:08:32.078 00:08:32.078 crypto: 00:08:32.078 00:08:32.078 compress: 00:08:32.078 00:08:32.078 vdpa: 00:08:32.078 00:08:32.078 00:08:32.078 Message: 00:08:32.078 ================= 00:08:32.078 Content Skipped 00:08:32.078 ================= 00:08:32.078 00:08:32.078 apps: 00:08:32.078 dumpcap: explicitly disabled via build config 00:08:32.078 graph: explicitly disabled via build config 00:08:32.078 pdump: explicitly disabled via build config 00:08:32.078 proc-info: explicitly disabled via build config 00:08:32.078 test-acl: explicitly disabled via build config 00:08:32.078 test-bbdev: explicitly disabled via build config 00:08:32.078 test-cmdline: explicitly disabled via build config 00:08:32.078 test-compress-perf: explicitly disabled via build config 00:08:32.078 test-crypto-perf: explicitly disabled via build config 00:08:32.078 test-dma-perf: explicitly disabled via build config 00:08:32.078 test-eventdev: explicitly disabled via build config 00:08:32.078 test-fib: explicitly disabled via build config 00:08:32.078 test-flow-perf: explicitly disabled via build config 00:08:32.078 test-gpudev: explicitly disabled via build config 00:08:32.078 test-mldev: explicitly disabled via build config 00:08:32.078 test-pipeline: explicitly disabled via build config 00:08:32.078 test-pmd: explicitly disabled via build config 00:08:32.078 test-regex: explicitly disabled via build config 00:08:32.078 test-sad: explicitly disabled via build config 00:08:32.078 test-security-perf: explicitly disabled via build config 00:08:32.078 00:08:32.078 libs: 00:08:32.078 metrics: explicitly disabled via build config 00:08:32.078 acl: explicitly disabled via 
build config 00:08:32.078 bbdev: explicitly disabled via build config 00:08:32.078 bitratestats: explicitly disabled via build config 00:08:32.078 bpf: explicitly disabled via build config 00:08:32.078 cfgfile: explicitly disabled via build config 00:08:32.078 distributor: explicitly disabled via build config 00:08:32.078 efd: explicitly disabled via build config 00:08:32.078 eventdev: explicitly disabled via build config 00:08:32.078 dispatcher: explicitly disabled via build config 00:08:32.078 gpudev: explicitly disabled via build config 00:08:32.078 gro: explicitly disabled via build config 00:08:32.078 gso: explicitly disabled via build config 00:08:32.078 ip_frag: explicitly disabled via build config 00:08:32.078 jobstats: explicitly disabled via build config 00:08:32.078 latencystats: explicitly disabled via build config 00:08:32.078 lpm: explicitly disabled via build config 00:08:32.078 member: explicitly disabled via build config 00:08:32.078 pcapng: explicitly disabled via build config 00:08:32.078 rawdev: explicitly disabled via build config 00:08:32.078 regexdev: explicitly disabled via build config 00:08:32.078 mldev: explicitly disabled via build config 00:08:32.078 rib: explicitly disabled via build config 00:08:32.078 sched: explicitly disabled via build config 00:08:32.078 stack: explicitly disabled via build config 00:08:32.078 ipsec: explicitly disabled via build config 00:08:32.078 pdcp: explicitly disabled via build config 00:08:32.078 fib: explicitly disabled via build config 00:08:32.078 port: explicitly disabled via build config 00:08:32.078 pdump: explicitly disabled via build config 00:08:32.078 table: explicitly disabled via build config 00:08:32.078 pipeline: explicitly disabled via build config 00:08:32.078 graph: explicitly disabled via build config 00:08:32.078 node: explicitly disabled via build config 00:08:32.078 00:08:32.078 drivers: 00:08:32.078 common/cpt: not in enabled drivers build config 00:08:32.078 common/dpaax: not in enabled drivers build config 00:08:32.078 common/iavf: not in enabled drivers build config 00:08:32.078 common/idpf: not in enabled drivers build config 00:08:32.078 common/mvep: not in enabled drivers build config 00:08:32.078 common/octeontx: not in enabled drivers build config 00:08:32.078 bus/auxiliary: not in enabled drivers build config 00:08:32.078 bus/cdx: not in enabled drivers build config 00:08:32.078 bus/dpaa: not in enabled drivers build config 00:08:32.078 bus/fslmc: not in enabled drivers build config 00:08:32.078 bus/ifpga: not in enabled drivers build config 00:08:32.078 bus/platform: not in enabled drivers build config 00:08:32.078 bus/vmbus: not in enabled drivers build config 00:08:32.078 common/cnxk: not in enabled drivers build config 00:08:32.078 common/mlx5: not in enabled drivers build config 00:08:32.078 common/nfp: not in enabled drivers build config 00:08:32.078 common/qat: not in enabled drivers build config 00:08:32.078 common/sfc_efx: not in enabled drivers build config 00:08:32.078 mempool/bucket: not in enabled drivers build config 00:08:32.078 mempool/cnxk: not in enabled drivers build config 00:08:32.078 mempool/dpaa: not in enabled drivers build config 00:08:32.078 mempool/dpaa2: not in enabled drivers build config 00:08:32.078 mempool/octeontx: not in enabled drivers build config 00:08:32.078 mempool/stack: not in enabled drivers build config 00:08:32.078 dma/cnxk: not in enabled drivers build config 00:08:32.078 dma/dpaa: not in enabled drivers build config 00:08:32.078 dma/dpaa2: not in enabled 
drivers build config 00:08:32.078 dma/hisilicon: not in enabled drivers build config 00:08:32.078 dma/idxd: not in enabled drivers build config 00:08:32.078 dma/ioat: not in enabled drivers build config 00:08:32.078 dma/skeleton: not in enabled drivers build config 00:08:32.078 net/af_packet: not in enabled drivers build config 00:08:32.078 net/af_xdp: not in enabled drivers build config 00:08:32.078 net/ark: not in enabled drivers build config 00:08:32.078 net/atlantic: not in enabled drivers build config 00:08:32.078 net/avp: not in enabled drivers build config 00:08:32.078 net/axgbe: not in enabled drivers build config 00:08:32.078 net/bnx2x: not in enabled drivers build config 00:08:32.078 net/bnxt: not in enabled drivers build config 00:08:32.078 net/bonding: not in enabled drivers build config 00:08:32.078 net/cnxk: not in enabled drivers build config 00:08:32.078 net/cpfl: not in enabled drivers build config 00:08:32.078 net/cxgbe: not in enabled drivers build config 00:08:32.078 net/dpaa: not in enabled drivers build config 00:08:32.078 net/dpaa2: not in enabled drivers build config 00:08:32.078 net/e1000: not in enabled drivers build config 00:08:32.078 net/ena: not in enabled drivers build config 00:08:32.078 net/enetc: not in enabled drivers build config 00:08:32.078 net/enetfec: not in enabled drivers build config 00:08:32.078 net/enic: not in enabled drivers build config 00:08:32.078 net/failsafe: not in enabled drivers build config 00:08:32.078 net/fm10k: not in enabled drivers build config 00:08:32.079 net/gve: not in enabled drivers build config 00:08:32.079 net/hinic: not in enabled drivers build config 00:08:32.079 net/hns3: not in enabled drivers build config 00:08:32.079 net/i40e: not in enabled drivers build config 00:08:32.079 net/iavf: not in enabled drivers build config 00:08:32.079 net/ice: not in enabled drivers build config 00:08:32.079 net/idpf: not in enabled drivers build config 00:08:32.079 net/igc: not in enabled drivers build config 00:08:32.079 net/ionic: not in enabled drivers build config 00:08:32.079 net/ipn3ke: not in enabled drivers build config 00:08:32.079 net/ixgbe: not in enabled drivers build config 00:08:32.079 net/mana: not in enabled drivers build config 00:08:32.079 net/memif: not in enabled drivers build config 00:08:32.079 net/mlx4: not in enabled drivers build config 00:08:32.079 net/mlx5: not in enabled drivers build config 00:08:32.079 net/mvneta: not in enabled drivers build config 00:08:32.079 net/mvpp2: not in enabled drivers build config 00:08:32.079 net/netvsc: not in enabled drivers build config 00:08:32.079 net/nfb: not in enabled drivers build config 00:08:32.079 net/nfp: not in enabled drivers build config 00:08:32.079 net/ngbe: not in enabled drivers build config 00:08:32.079 net/null: not in enabled drivers build config 00:08:32.079 net/octeontx: not in enabled drivers build config 00:08:32.079 net/octeon_ep: not in enabled drivers build config 00:08:32.079 net/pcap: not in enabled drivers build config 00:08:32.079 net/pfe: not in enabled drivers build config 00:08:32.079 net/qede: not in enabled drivers build config 00:08:32.079 net/ring: not in enabled drivers build config 00:08:32.079 net/sfc: not in enabled drivers build config 00:08:32.079 net/softnic: not in enabled drivers build config 00:08:32.079 net/tap: not in enabled drivers build config 00:08:32.079 net/thunderx: not in enabled drivers build config 00:08:32.079 net/txgbe: not in enabled drivers build config 00:08:32.079 net/vdev_netvsc: not in enabled drivers 
build config 00:08:32.079 net/vhost: not in enabled drivers build config 00:08:32.079 net/virtio: not in enabled drivers build config 00:08:32.079 net/vmxnet3: not in enabled drivers build config 00:08:32.079 raw/*: missing internal dependency, "rawdev" 00:08:32.079 crypto/armv8: not in enabled drivers build config 00:08:32.079 crypto/bcmfs: not in enabled drivers build config 00:08:32.079 crypto/caam_jr: not in enabled drivers build config 00:08:32.079 crypto/ccp: not in enabled drivers build config 00:08:32.079 crypto/cnxk: not in enabled drivers build config 00:08:32.079 crypto/dpaa_sec: not in enabled drivers build config 00:08:32.079 crypto/dpaa2_sec: not in enabled drivers build config 00:08:32.079 crypto/ipsec_mb: not in enabled drivers build config 00:08:32.079 crypto/mlx5: not in enabled drivers build config 00:08:32.079 crypto/mvsam: not in enabled drivers build config 00:08:32.079 crypto/nitrox: not in enabled drivers build config 00:08:32.079 crypto/null: not in enabled drivers build config 00:08:32.079 crypto/octeontx: not in enabled drivers build config 00:08:32.079 crypto/openssl: not in enabled drivers build config 00:08:32.079 crypto/scheduler: not in enabled drivers build config 00:08:32.079 crypto/uadk: not in enabled drivers build config 00:08:32.079 crypto/virtio: not in enabled drivers build config 00:08:32.079 compress/isal: not in enabled drivers build config 00:08:32.079 compress/mlx5: not in enabled drivers build config 00:08:32.079 compress/octeontx: not in enabled drivers build config 00:08:32.079 compress/zlib: not in enabled drivers build config 00:08:32.079 regex/*: missing internal dependency, "regexdev" 00:08:32.079 ml/*: missing internal dependency, "mldev" 00:08:32.079 vdpa/ifc: not in enabled drivers build config 00:08:32.079 vdpa/mlx5: not in enabled drivers build config 00:08:32.079 vdpa/nfp: not in enabled drivers build config 00:08:32.079 vdpa/sfc: not in enabled drivers build config 00:08:32.079 event/*: missing internal dependency, "eventdev" 00:08:32.079 baseband/*: missing internal dependency, "bbdev" 00:08:32.079 gpu/*: missing internal dependency, "gpudev" 00:08:32.079 00:08:32.079 00:08:32.079 Build targets in project: 85 00:08:32.079 00:08:32.079 DPDK 23.11.0 00:08:32.079 00:08:32.079 User defined options 00:08:32.079 buildtype : debug 00:08:32.079 default_library : shared 00:08:32.079 libdir : lib 00:08:32.079 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:32.079 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:08:32.079 c_link_args : 00:08:32.079 cpu_instruction_set: native 00:08:32.079 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:08:32.079 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:08:32.079 enable_docs : false 00:08:32.079 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:08:32.079 enable_kmods : false 00:08:32.079 tests : false 00:08:32.079 00:08:32.079 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:32.336 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:08:32.336 [1/265] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:08:32.336 [2/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:08:32.336 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:32.336 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:08:32.336 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:08:32.336 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:32.336 [7/265] Linking static target lib/librte_kvargs.a 00:08:32.336 [8/265] Linking static target lib/librte_log.a 00:08:32.594 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:08:32.594 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:08:32.853 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:32.853 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:32.853 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:33.112 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:33.112 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:33.112 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:08:33.112 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:33.112 [18/265] Linking static target lib/librte_telemetry.a 00:08:33.112 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:33.370 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:33.370 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:33.632 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:08:33.632 [23/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:33.632 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:33.632 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:33.889 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:08:33.889 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:33.889 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:33.889 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:34.146 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:08:34.403 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:08:34.403 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:34.403 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:34.403 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:34.403 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:08:34.403 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:34.403 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:34.403 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:34.661 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:34.661 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
00:08:34.661 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:34.918 [42/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.918 [43/265] Linking target lib/librte_log.so.24.0 00:08:35.175 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:35.175 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:35.175 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:35.175 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:35.175 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:08:35.175 [49/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.175 [50/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:08:35.433 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:35.433 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:35.433 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:08:35.433 [54/265] Linking target lib/librte_kvargs.so.24.0 00:08:35.433 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:35.433 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:35.433 [57/265] Linking target lib/librte_telemetry.so.24.0 00:08:35.690 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:35.690 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:35.690 [60/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:08:35.690 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:35.690 [62/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:08:35.998 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:08:35.999 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:35.999 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:35.999 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:36.257 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:36.257 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:36.257 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:36.514 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:36.514 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:36.514 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:36.514 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:36.514 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:36.514 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:36.515 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:36.772 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:36.773 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:36.773 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:36.773 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:37.030 [81/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:37.030 [82/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:37.287 [83/265] Linking static target lib/librte_ring.a 00:08:37.287 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:37.287 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:37.287 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:37.544 [87/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:37.544 [88/265] Linking static target lib/librte_eal.a 00:08:37.544 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:37.800 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:37.800 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:37.800 [92/265] Linking static target lib/librte_rcu.a 00:08:37.800 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:37.800 [94/265] Linking static target lib/librte_mempool.a 00:08:38.057 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:38.057 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:38.330 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:38.330 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:38.330 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:38.330 [100/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:38.587 [101/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:38.587 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:38.587 [103/265] Linking static target lib/librte_mbuf.a 00:08:38.844 [104/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:38.844 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:38.844 [106/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:38.844 [107/265] Linking static target lib/librte_meter.a 00:08:38.844 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:38.844 [109/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:38.844 [110/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:39.101 [111/265] Linking static target lib/librte_net.a 00:08:39.359 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:39.359 [113/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.359 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:39.359 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:39.616 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:39.875 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.875 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:40.132 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:40.132 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:40.390 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:40.390 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:40.647 [123/265] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:40.647 [124/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:40.647 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:40.647 [126/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:40.647 [127/265] Linking static target lib/librte_pci.a 00:08:40.904 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:40.904 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:40.904 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:40.904 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:41.162 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:41.162 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:41.162 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:41.162 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:41.162 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:41.162 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:41.162 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:41.162 [139/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:41.162 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:41.162 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:41.419 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:41.419 [143/265] Linking static target lib/librte_ethdev.a 00:08:41.419 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:41.419 [145/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:41.419 [146/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:41.419 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:41.677 [148/265] Linking static target lib/librte_cmdline.a 00:08:41.934 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:41.934 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:41.934 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:41.934 [152/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:42.191 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:42.191 [154/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:42.191 [155/265] Linking static target lib/librte_hash.a 00:08:42.191 [156/265] Linking static target lib/librte_timer.a 00:08:42.448 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:42.448 [158/265] Linking static target lib/librte_compressdev.a 00:08:42.448 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:42.448 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:42.448 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:42.706 [162/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:42.964 [163/265] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:42.964 [164/265] Linking static target lib/librte_dmadev.a 00:08:42.964 [165/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:43.221 [166/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:43.221 [167/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:43.221 [168/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:43.221 [169/265] Linking static target lib/librte_cryptodev.a 00:08:43.221 [170/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:43.221 [171/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.478 [172/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:43.736 [173/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:43.736 [174/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.993 [175/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.993 [176/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.993 [177/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:43.993 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:43.993 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:44.249 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:44.506 [181/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:44.506 [182/265] Linking static target lib/librte_reorder.a 00:08:44.763 [183/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:44.763 [184/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:44.763 [185/265] Linking static target lib/librte_power.a 00:08:44.763 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:45.020 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:45.020 [188/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.020 [189/265] Linking static target lib/librte_security.a 00:08:45.020 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:45.278 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.843 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:45.843 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:45.843 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:46.119 [195/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:46.119 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:46.377 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:46.377 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:46.634 [199/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:46.634 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:46.634 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:46.892 [202/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:46.892 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:46.892 [204/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:46.892 [205/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:46.892 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:46.892 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:46.892 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:46.892 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:47.150 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:47.150 [211/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:47.150 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:47.150 [213/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:47.150 [214/265] Linking static target drivers/librte_bus_pci.a 00:08:47.150 [215/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:47.150 [216/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:47.150 [217/265] Linking static target drivers/librte_bus_vdev.a 00:08:47.408 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:47.408 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:47.665 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:47.665 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:47.665 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:47.665 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:47.665 [224/265] Linking static target drivers/librte_mempool_ring.a 00:08:47.923 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:48.490 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:48.490 [227/265] Linking static target lib/librte_vhost.a 00:08:50.392 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:51.327 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:52.703 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:52.703 [231/265] Linking target lib/librte_eal.so.24.0 00:08:52.962 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:08:52.962 [233/265] Linking target lib/librte_dmadev.so.24.0 00:08:52.962 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:08:52.962 [235/265] Linking target lib/librte_ring.so.24.0 00:08:52.962 [236/265] Linking target lib/librte_pci.so.24.0 00:08:52.962 [237/265] Linking target lib/librte_timer.so.24.0 00:08:52.962 [238/265] Linking target lib/librte_meter.so.24.0 00:08:53.221 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:08:53.221 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:08:53.221 [241/265] Generating symbol 
file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:08:53.221 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:08:53.221 [243/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:08:53.221 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:08:53.221 [245/265] Linking target lib/librte_rcu.so.24.0 00:08:53.221 [246/265] Linking target lib/librte_mempool.so.24.0 00:08:53.478 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:08:53.478 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:08:53.478 [249/265] Linking target lib/librte_mbuf.so.24.0 00:08:53.478 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:08:53.737 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:08:53.737 [252/265] Linking target lib/librte_compressdev.so.24.0 00:08:53.737 [253/265] Linking target lib/librte_net.so.24.0 00:08:53.737 [254/265] Linking target lib/librte_reorder.so.24.0 00:08:53.737 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:08:53.995 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:08:53.995 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:08:53.995 [258/265] Linking target lib/librte_cmdline.so.24.0 00:08:53.995 [259/265] Linking target lib/librte_ethdev.so.24.0 00:08:53.995 [260/265] Linking target lib/librte_hash.so.24.0 00:08:53.995 [261/265] Linking target lib/librte_security.so.24.0 00:08:54.254 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:08:54.254 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:08:54.254 [264/265] Linking target lib/librte_power.so.24.0 00:08:54.512 [265/265] Linking target lib/librte_vhost.so.24.0 00:08:54.512 INFO: autodetecting backend as ninja 00:08:54.512 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:55.887 CC lib/log/log_flags.o 00:08:55.887 CC lib/log/log.o 00:08:55.887 CC lib/ut_mock/mock.o 00:08:55.887 CC lib/log/log_deprecated.o 00:08:55.887 CC lib/ut/ut.o 00:08:55.887 LIB libspdk_ut_mock.a 00:08:55.887 LIB libspdk_ut.a 00:08:55.887 LIB libspdk_log.a 00:08:55.887 SO libspdk_ut.so.2.0 00:08:55.887 SO libspdk_ut_mock.so.6.0 00:08:55.887 SO libspdk_log.so.7.0 00:08:55.887 SYMLINK libspdk_ut_mock.so 00:08:55.887 SYMLINK libspdk_ut.so 00:08:56.144 SYMLINK libspdk_log.so 00:08:56.430 CC lib/ioat/ioat.o 00:08:56.430 CXX lib/trace_parser/trace.o 00:08:56.430 CC lib/dma/dma.o 00:08:56.430 CC lib/util/base64.o 00:08:56.430 CC lib/util/bit_array.o 00:08:56.430 CC lib/util/cpuset.o 00:08:56.430 CC lib/util/crc16.o 00:08:56.430 CC lib/util/crc32c.o 00:08:56.430 CC lib/util/crc32.o 00:08:56.430 CC lib/vfio_user/host/vfio_user_pci.o 00:08:56.717 CC lib/vfio_user/host/vfio_user.o 00:08:56.717 CC lib/util/crc32_ieee.o 00:08:56.717 LIB libspdk_dma.a 00:08:56.717 CC lib/util/crc64.o 00:08:56.717 CC lib/util/dif.o 00:08:56.717 CC lib/util/fd.o 00:08:56.717 SO libspdk_dma.so.4.0 00:08:56.717 CC lib/util/file.o 00:08:56.717 SYMLINK libspdk_dma.so 00:08:56.717 CC lib/util/hexlify.o 00:08:56.717 CC lib/util/iov.o 00:08:56.717 CC lib/util/math.o 00:08:56.717 LIB libspdk_ioat.a 00:08:56.717 CC lib/util/pipe.o 00:08:56.717 CC lib/util/strerror_tls.o 00:08:56.975 SO libspdk_ioat.so.7.0 00:08:56.975 CC 
lib/util/string.o 00:08:56.975 CC lib/util/uuid.o 00:08:56.975 LIB libspdk_vfio_user.a 00:08:56.975 SYMLINK libspdk_ioat.so 00:08:56.975 CC lib/util/fd_group.o 00:08:56.975 CC lib/util/xor.o 00:08:56.976 CC lib/util/zipf.o 00:08:56.976 SO libspdk_vfio_user.so.5.0 00:08:56.976 SYMLINK libspdk_vfio_user.so 00:08:57.233 LIB libspdk_util.a 00:08:57.233 SO libspdk_util.so.9.0 00:08:57.490 LIB libspdk_trace_parser.a 00:08:57.490 SO libspdk_trace_parser.so.5.0 00:08:57.490 SYMLINK libspdk_util.so 00:08:57.748 SYMLINK libspdk_trace_parser.so 00:08:57.748 CC lib/idxd/idxd.o 00:08:57.748 CC lib/idxd/idxd_user.o 00:08:57.748 CC lib/json/json_parse.o 00:08:57.748 CC lib/env_dpdk/memory.o 00:08:57.748 CC lib/env_dpdk/env.o 00:08:57.748 CC lib/env_dpdk/pci.o 00:08:57.748 CC lib/json/json_util.o 00:08:57.748 CC lib/vmd/vmd.o 00:08:57.748 CC lib/rdma/common.o 00:08:57.748 CC lib/conf/conf.o 00:08:58.004 CC lib/rdma/rdma_verbs.o 00:08:58.004 LIB libspdk_conf.a 00:08:58.004 CC lib/json/json_write.o 00:08:58.004 CC lib/env_dpdk/init.o 00:08:58.004 SO libspdk_conf.so.6.0 00:08:58.004 CC lib/vmd/led.o 00:08:58.004 SYMLINK libspdk_conf.so 00:08:58.004 CC lib/env_dpdk/threads.o 00:08:58.004 CC lib/env_dpdk/pci_ioat.o 00:08:58.262 LIB libspdk_rdma.a 00:08:58.262 CC lib/env_dpdk/pci_virtio.o 00:08:58.262 CC lib/env_dpdk/pci_vmd.o 00:08:58.262 SO libspdk_rdma.so.6.0 00:08:58.262 CC lib/env_dpdk/pci_idxd.o 00:08:58.262 LIB libspdk_json.a 00:08:58.262 LIB libspdk_idxd.a 00:08:58.520 SO libspdk_json.so.6.0 00:08:58.520 SYMLINK libspdk_rdma.so 00:08:58.520 SO libspdk_idxd.so.12.0 00:08:58.520 CC lib/env_dpdk/pci_event.o 00:08:58.520 CC lib/env_dpdk/sigbus_handler.o 00:08:58.520 LIB libspdk_vmd.a 00:08:58.520 SYMLINK libspdk_json.so 00:08:58.520 CC lib/env_dpdk/pci_dpdk.o 00:08:58.520 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:58.520 SYMLINK libspdk_idxd.so 00:08:58.520 SO libspdk_vmd.so.6.0 00:08:58.520 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:58.520 SYMLINK libspdk_vmd.so 00:08:58.777 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:58.777 CC lib/jsonrpc/jsonrpc_server.o 00:08:58.777 CC lib/jsonrpc/jsonrpc_client.o 00:08:58.777 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:59.035 LIB libspdk_jsonrpc.a 00:08:59.293 SO libspdk_jsonrpc.so.6.0 00:08:59.293 SYMLINK libspdk_jsonrpc.so 00:08:59.552 LIB libspdk_env_dpdk.a 00:08:59.552 SO libspdk_env_dpdk.so.14.0 00:08:59.552 CC lib/rpc/rpc.o 00:08:59.810 SYMLINK libspdk_env_dpdk.so 00:08:59.810 LIB libspdk_rpc.a 00:08:59.810 SO libspdk_rpc.so.6.0 00:09:00.068 SYMLINK libspdk_rpc.so 00:09:00.327 CC lib/keyring/keyring_rpc.o 00:09:00.327 CC lib/keyring/keyring.o 00:09:00.327 CC lib/notify/notify_rpc.o 00:09:00.327 CC lib/trace/trace_flags.o 00:09:00.327 CC lib/notify/notify.o 00:09:00.327 CC lib/trace/trace.o 00:09:00.327 CC lib/trace/trace_rpc.o 00:09:00.585 LIB libspdk_notify.a 00:09:00.585 LIB libspdk_keyring.a 00:09:00.585 SO libspdk_notify.so.6.0 00:09:00.585 SO libspdk_keyring.so.1.0 00:09:00.585 LIB libspdk_trace.a 00:09:00.585 SYMLINK libspdk_notify.so 00:09:00.844 SYMLINK libspdk_keyring.so 00:09:00.844 SO libspdk_trace.so.10.0 00:09:00.844 SYMLINK libspdk_trace.so 00:09:01.102 CC lib/thread/iobuf.o 00:09:01.102 CC lib/thread/thread.o 00:09:01.102 CC lib/sock/sock.o 00:09:01.102 CC lib/sock/sock_rpc.o 00:09:01.676 LIB libspdk_sock.a 00:09:01.676 SO libspdk_sock.so.9.0 00:09:01.676 SYMLINK libspdk_sock.so 00:09:02.241 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:02.241 CC lib/nvme/nvme_ctrlr.o 00:09:02.241 CC lib/nvme/nvme_ns_cmd.o 00:09:02.241 CC lib/nvme/nvme_ns.o 00:09:02.241 CC 
lib/nvme/nvme_fabric.o 00:09:02.241 CC lib/nvme/nvme_qpair.o 00:09:02.241 CC lib/nvme/nvme_pcie.o 00:09:02.241 CC lib/nvme/nvme_pcie_common.o 00:09:02.241 CC lib/nvme/nvme.o 00:09:02.839 LIB libspdk_thread.a 00:09:02.839 CC lib/nvme/nvme_quirks.o 00:09:02.839 SO libspdk_thread.so.10.0 00:09:02.839 CC lib/nvme/nvme_transport.o 00:09:03.096 SYMLINK libspdk_thread.so 00:09:03.096 CC lib/nvme/nvme_discovery.o 00:09:03.096 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:03.354 CC lib/accel/accel.o 00:09:03.354 CC lib/blob/blobstore.o 00:09:03.354 CC lib/init/json_config.o 00:09:03.354 CC lib/init/subsystem.o 00:09:03.639 CC lib/virtio/virtio.o 00:09:03.639 CC lib/virtio/virtio_vhost_user.o 00:09:03.639 CC lib/virtio/virtio_vfio_user.o 00:09:03.639 CC lib/init/subsystem_rpc.o 00:09:03.897 CC lib/virtio/virtio_pci.o 00:09:03.897 CC lib/blob/request.o 00:09:03.897 CC lib/blob/zeroes.o 00:09:03.897 CC lib/init/rpc.o 00:09:03.897 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:03.897 CC lib/accel/accel_rpc.o 00:09:03.897 CC lib/accel/accel_sw.o 00:09:04.154 CC lib/blob/blob_bs_dev.o 00:09:04.154 LIB libspdk_init.a 00:09:04.154 LIB libspdk_virtio.a 00:09:04.154 CC lib/nvme/nvme_tcp.o 00:09:04.154 SO libspdk_init.so.5.0 00:09:04.154 CC lib/nvme/nvme_opal.o 00:09:04.154 SO libspdk_virtio.so.7.0 00:09:04.154 SYMLINK libspdk_init.so 00:09:04.154 SYMLINK libspdk_virtio.so 00:09:04.154 CC lib/nvme/nvme_io_msg.o 00:09:04.154 CC lib/nvme/nvme_poll_group.o 00:09:04.412 CC lib/nvme/nvme_zns.o 00:09:04.412 LIB libspdk_accel.a 00:09:04.412 SO libspdk_accel.so.15.0 00:09:04.412 CC lib/event/app.o 00:09:04.670 CC lib/nvme/nvme_stubs.o 00:09:04.670 CC lib/nvme/nvme_auth.o 00:09:04.670 SYMLINK libspdk_accel.so 00:09:04.670 CC lib/nvme/nvme_cuse.o 00:09:04.927 CC lib/nvme/nvme_rdma.o 00:09:04.927 CC lib/event/reactor.o 00:09:04.927 CC lib/event/log_rpc.o 00:09:04.927 CC lib/event/app_rpc.o 00:09:04.927 CC lib/bdev/bdev.o 00:09:05.186 CC lib/event/scheduler_static.o 00:09:05.186 CC lib/bdev/bdev_rpc.o 00:09:05.186 CC lib/bdev/bdev_zone.o 00:09:05.444 CC lib/bdev/part.o 00:09:05.444 LIB libspdk_event.a 00:09:05.444 SO libspdk_event.so.13.0 00:09:05.444 CC lib/bdev/scsi_nvme.o 00:09:05.444 SYMLINK libspdk_event.so 00:09:06.379 LIB libspdk_nvme.a 00:09:06.663 SO libspdk_nvme.so.13.0 00:09:06.921 LIB libspdk_blob.a 00:09:06.921 SO libspdk_blob.so.11.0 00:09:06.921 SYMLINK libspdk_nvme.so 00:09:07.178 SYMLINK libspdk_blob.so 00:09:07.435 CC lib/blobfs/blobfs.o 00:09:07.435 CC lib/blobfs/tree.o 00:09:07.435 CC lib/lvol/lvol.o 00:09:08.001 LIB libspdk_bdev.a 00:09:08.001 SO libspdk_bdev.so.15.0 00:09:08.001 SYMLINK libspdk_bdev.so 00:09:08.259 LIB libspdk_blobfs.a 00:09:08.259 SO libspdk_blobfs.so.10.0 00:09:08.259 CC lib/nbd/nbd.o 00:09:08.259 CC lib/nbd/nbd_rpc.o 00:09:08.259 CC lib/ftl/ftl_core.o 00:09:08.259 CC lib/ftl/ftl_init.o 00:09:08.259 CC lib/scsi/dev.o 00:09:08.259 CC lib/scsi/lun.o 00:09:08.259 CC lib/ublk/ublk.o 00:09:08.259 CC lib/nvmf/ctrlr.o 00:09:08.516 SYMLINK libspdk_blobfs.so 00:09:08.516 CC lib/ublk/ublk_rpc.o 00:09:08.516 LIB libspdk_lvol.a 00:09:08.516 SO libspdk_lvol.so.10.0 00:09:08.516 CC lib/ftl/ftl_layout.o 00:09:08.774 SYMLINK libspdk_lvol.so 00:09:08.774 CC lib/ftl/ftl_debug.o 00:09:08.774 CC lib/ftl/ftl_io.o 00:09:08.774 CC lib/ftl/ftl_sb.o 00:09:08.774 CC lib/scsi/port.o 00:09:08.774 CC lib/scsi/scsi.o 00:09:08.774 LIB libspdk_nbd.a 00:09:08.774 CC lib/scsi/scsi_bdev.o 00:09:08.774 SO libspdk_nbd.so.7.0 00:09:09.032 CC lib/scsi/scsi_pr.o 00:09:09.032 SYMLINK libspdk_nbd.so 00:09:09.032 CC lib/ftl/ftl_l2p.o 
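A note on the CC/LIB/SO/SYMLINK lines running through this stretch: they come from SPDK's own make-based build, as opposed to the meson/ninja DPDK sub-build above. A minimal hand-run equivalent is sketched below; the configure flags autotest actually uses are derived from autorun-spdk.conf, so the flag shown is only an example.
    # Hand-run equivalent of the SPDK build stage traced here (example flag only).
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug   # real runs take their flag set from the autotest config
    make -j10                    # produces the CC/LIB/SO/SYMLINK output seen in this log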
00:09:09.032 CC lib/ftl/ftl_l2p_flat.o 00:09:09.032 CC lib/ftl/ftl_nv_cache.o 00:09:09.032 CC lib/ftl/ftl_band.o 00:09:09.032 CC lib/ftl/ftl_band_ops.o 00:09:09.032 CC lib/ftl/ftl_writer.o 00:09:09.032 LIB libspdk_ublk.a 00:09:09.291 SO libspdk_ublk.so.3.0 00:09:09.291 CC lib/ftl/ftl_rq.o 00:09:09.291 SYMLINK libspdk_ublk.so 00:09:09.291 CC lib/scsi/scsi_rpc.o 00:09:09.291 CC lib/ftl/ftl_reloc.o 00:09:09.291 CC lib/ftl/ftl_l2p_cache.o 00:09:09.291 CC lib/scsi/task.o 00:09:09.291 CC lib/nvmf/ctrlr_discovery.o 00:09:09.548 CC lib/nvmf/ctrlr_bdev.o 00:09:09.548 CC lib/nvmf/subsystem.o 00:09:09.548 CC lib/nvmf/nvmf.o 00:09:09.548 CC lib/nvmf/nvmf_rpc.o 00:09:09.548 LIB libspdk_scsi.a 00:09:09.807 CC lib/ftl/ftl_p2l.o 00:09:09.807 SO libspdk_scsi.so.9.0 00:09:09.807 SYMLINK libspdk_scsi.so 00:09:09.807 CC lib/ftl/mngt/ftl_mngt.o 00:09:10.065 CC lib/nvmf/transport.o 00:09:10.065 CC lib/nvmf/tcp.o 00:09:10.323 CC lib/nvmf/stubs.o 00:09:10.323 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:10.323 CC lib/iscsi/conn.o 00:09:10.323 CC lib/vhost/vhost.o 00:09:10.323 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:10.580 CC lib/nvmf/mdns_server.o 00:09:10.580 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:10.581 CC lib/vhost/vhost_rpc.o 00:09:10.581 CC lib/nvmf/rdma.o 00:09:10.839 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:10.839 CC lib/vhost/vhost_scsi.o 00:09:10.839 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:10.839 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:11.096 CC lib/iscsi/init_grp.o 00:09:11.096 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:11.096 CC lib/nvmf/auth.o 00:09:11.096 CC lib/vhost/vhost_blk.o 00:09:11.096 CC lib/vhost/rte_vhost_user.o 00:09:11.096 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:11.354 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:11.354 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:11.354 CC lib/iscsi/iscsi.o 00:09:11.610 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:11.610 CC lib/iscsi/md5.o 00:09:11.610 CC lib/iscsi/param.o 00:09:11.867 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:11.867 CC lib/ftl/utils/ftl_conf.o 00:09:12.152 CC lib/ftl/utils/ftl_md.o 00:09:12.152 CC lib/ftl/utils/ftl_mempool.o 00:09:12.152 CC lib/iscsi/portal_grp.o 00:09:12.152 CC lib/iscsi/tgt_node.o 00:09:12.152 CC lib/ftl/utils/ftl_bitmap.o 00:09:12.152 CC lib/ftl/utils/ftl_property.o 00:09:12.152 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:12.408 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:12.408 LIB libspdk_vhost.a 00:09:12.408 CC lib/iscsi/iscsi_subsystem.o 00:09:12.408 SO libspdk_vhost.so.8.0 00:09:12.408 CC lib/iscsi/iscsi_rpc.o 00:09:12.408 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:12.408 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:12.664 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:12.664 CC lib/iscsi/task.o 00:09:12.664 SYMLINK libspdk_vhost.so 00:09:12.664 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:12.664 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:12.922 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:12.922 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:12.922 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:12.922 CC lib/ftl/base/ftl_base_dev.o 00:09:12.922 CC lib/ftl/base/ftl_base_bdev.o 00:09:12.922 CC lib/ftl/ftl_trace.o 00:09:13.180 LIB libspdk_iscsi.a 00:09:13.180 LIB libspdk_nvmf.a 00:09:13.180 SO libspdk_iscsi.so.8.0 00:09:13.180 LIB libspdk_ftl.a 00:09:13.180 SO libspdk_nvmf.so.18.0 00:09:13.437 SYMLINK libspdk_iscsi.so 00:09:13.437 SYMLINK libspdk_nvmf.so 00:09:13.437 SO libspdk_ftl.so.9.0 00:09:14.002 SYMLINK libspdk_ftl.so 00:09:14.260 CC module/env_dpdk/env_dpdk_rpc.o 00:09:14.519 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:14.519 CC 
module/scheduler/gscheduler/gscheduler.o 00:09:14.519 CC module/accel/error/accel_error.o 00:09:14.519 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:14.519 CC module/sock/posix/posix.o 00:09:14.519 CC module/keyring/file/keyring.o 00:09:14.519 CC module/blob/bdev/blob_bdev.o 00:09:14.519 CC module/accel/dsa/accel_dsa.o 00:09:14.519 CC module/accel/ioat/accel_ioat.o 00:09:14.519 LIB libspdk_env_dpdk_rpc.a 00:09:14.519 SO libspdk_env_dpdk_rpc.so.6.0 00:09:14.519 CC module/keyring/file/keyring_rpc.o 00:09:14.519 CC module/accel/error/accel_error_rpc.o 00:09:14.519 LIB libspdk_scheduler_dynamic.a 00:09:14.833 SYMLINK libspdk_env_dpdk_rpc.so 00:09:14.833 LIB libspdk_scheduler_dpdk_governor.a 00:09:14.833 CC module/accel/dsa/accel_dsa_rpc.o 00:09:14.833 SO libspdk_scheduler_dynamic.so.4.0 00:09:14.833 LIB libspdk_scheduler_gscheduler.a 00:09:14.833 CC module/accel/ioat/accel_ioat_rpc.o 00:09:14.833 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:14.833 SO libspdk_scheduler_gscheduler.so.4.0 00:09:14.833 SYMLINK libspdk_scheduler_dynamic.so 00:09:14.833 LIB libspdk_blob_bdev.a 00:09:14.833 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:14.833 LIB libspdk_keyring_file.a 00:09:14.833 SYMLINK libspdk_scheduler_gscheduler.so 00:09:14.833 SO libspdk_blob_bdev.so.11.0 00:09:14.833 LIB libspdk_accel_dsa.a 00:09:14.833 SO libspdk_keyring_file.so.1.0 00:09:14.833 LIB libspdk_accel_error.a 00:09:14.833 SO libspdk_accel_dsa.so.5.0 00:09:14.833 SYMLINK libspdk_blob_bdev.so 00:09:14.833 LIB libspdk_accel_ioat.a 00:09:14.833 SYMLINK libspdk_keyring_file.so 00:09:14.833 SO libspdk_accel_error.so.2.0 00:09:15.093 SO libspdk_accel_ioat.so.6.0 00:09:15.093 SYMLINK libspdk_accel_dsa.so 00:09:15.093 SYMLINK libspdk_accel_error.so 00:09:15.093 CC module/accel/iaa/accel_iaa.o 00:09:15.093 CC module/accel/iaa/accel_iaa_rpc.o 00:09:15.093 SYMLINK libspdk_accel_ioat.so 00:09:15.353 CC module/blobfs/bdev/blobfs_bdev.o 00:09:15.353 CC module/bdev/malloc/bdev_malloc.o 00:09:15.353 CC module/bdev/delay/vbdev_delay.o 00:09:15.353 CC module/bdev/error/vbdev_error.o 00:09:15.353 LIB libspdk_accel_iaa.a 00:09:15.353 CC module/bdev/null/bdev_null.o 00:09:15.353 CC module/bdev/gpt/gpt.o 00:09:15.353 CC module/bdev/lvol/vbdev_lvol.o 00:09:15.353 SO libspdk_accel_iaa.so.3.0 00:09:15.353 LIB libspdk_sock_posix.a 00:09:15.353 SYMLINK libspdk_accel_iaa.so 00:09:15.353 CC module/bdev/null/bdev_null_rpc.o 00:09:15.353 CC module/bdev/nvme/bdev_nvme.o 00:09:15.353 SO libspdk_sock_posix.so.6.0 00:09:15.613 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:15.613 SYMLINK libspdk_sock_posix.so 00:09:15.613 CC module/bdev/gpt/vbdev_gpt.o 00:09:15.613 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:15.613 CC module/bdev/error/vbdev_error_rpc.o 00:09:15.613 CC module/bdev/nvme/nvme_rpc.o 00:09:15.613 LIB libspdk_bdev_null.a 00:09:15.613 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:15.613 LIB libspdk_blobfs_bdev.a 00:09:15.613 SO libspdk_bdev_null.so.6.0 00:09:15.871 SO libspdk_blobfs_bdev.so.6.0 00:09:15.871 LIB libspdk_bdev_error.a 00:09:15.871 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:15.871 SYMLINK libspdk_bdev_null.so 00:09:15.871 SO libspdk_bdev_error.so.6.0 00:09:15.871 SYMLINK libspdk_blobfs_bdev.so 00:09:15.871 CC module/bdev/nvme/bdev_mdns_client.o 00:09:15.871 LIB libspdk_bdev_delay.a 00:09:15.871 LIB libspdk_bdev_gpt.a 00:09:15.871 SYMLINK libspdk_bdev_error.so 00:09:15.871 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:15.871 SO libspdk_bdev_gpt.so.6.0 00:09:15.871 SO libspdk_bdev_delay.so.6.0 00:09:16.130 LIB libspdk_bdev_malloc.a 
00:09:16.130 SYMLINK libspdk_bdev_gpt.so 00:09:16.130 SYMLINK libspdk_bdev_delay.so 00:09:16.130 SO libspdk_bdev_malloc.so.6.0 00:09:16.130 CC module/bdev/passthru/vbdev_passthru.o 00:09:16.130 SYMLINK libspdk_bdev_malloc.so 00:09:16.130 CC module/bdev/raid/bdev_raid.o 00:09:16.389 CC module/bdev/nvme/vbdev_opal.o 00:09:16.389 CC module/bdev/split/vbdev_split.o 00:09:16.389 CC module/bdev/split/vbdev_split_rpc.o 00:09:16.389 CC module/bdev/aio/bdev_aio.o 00:09:16.389 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:16.389 LIB libspdk_bdev_lvol.a 00:09:16.389 SO libspdk_bdev_lvol.so.6.0 00:09:16.389 CC module/bdev/ftl/bdev_ftl.o 00:09:16.647 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:16.647 SYMLINK libspdk_bdev_lvol.so 00:09:16.647 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:16.647 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:16.647 LIB libspdk_bdev_split.a 00:09:16.647 CC module/bdev/aio/bdev_aio_rpc.o 00:09:16.647 SO libspdk_bdev_split.so.6.0 00:09:16.647 LIB libspdk_bdev_passthru.a 00:09:16.905 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:16.905 SYMLINK libspdk_bdev_split.so 00:09:16.905 SO libspdk_bdev_passthru.so.6.0 00:09:16.905 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:16.905 CC module/bdev/raid/bdev_raid_rpc.o 00:09:16.905 SYMLINK libspdk_bdev_passthru.so 00:09:16.905 CC module/bdev/raid/bdev_raid_sb.o 00:09:16.905 LIB libspdk_bdev_aio.a 00:09:16.905 LIB libspdk_bdev_ftl.a 00:09:16.905 SO libspdk_bdev_aio.so.6.0 00:09:17.163 SO libspdk_bdev_ftl.so.6.0 00:09:17.163 CC module/bdev/iscsi/bdev_iscsi.o 00:09:17.163 LIB libspdk_bdev_zone_block.a 00:09:17.163 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:17.163 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:17.163 SO libspdk_bdev_zone_block.so.6.0 00:09:17.163 SYMLINK libspdk_bdev_ftl.so 00:09:17.163 SYMLINK libspdk_bdev_aio.so 00:09:17.163 CC module/bdev/raid/raid0.o 00:09:17.163 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:17.163 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:17.431 CC module/bdev/raid/raid1.o 00:09:17.431 SYMLINK libspdk_bdev_zone_block.so 00:09:17.431 CC module/bdev/raid/concat.o 00:09:17.695 LIB libspdk_bdev_raid.a 00:09:17.695 SO libspdk_bdev_raid.so.6.0 00:09:17.695 LIB libspdk_bdev_iscsi.a 00:09:17.695 SO libspdk_bdev_iscsi.so.6.0 00:09:17.953 SYMLINK libspdk_bdev_raid.so 00:09:17.953 SYMLINK libspdk_bdev_iscsi.so 00:09:17.953 LIB libspdk_bdev_virtio.a 00:09:17.953 SO libspdk_bdev_virtio.so.6.0 00:09:17.953 SYMLINK libspdk_bdev_virtio.so 00:09:18.211 LIB libspdk_bdev_nvme.a 00:09:18.211 SO libspdk_bdev_nvme.so.7.0 00:09:18.468 SYMLINK libspdk_bdev_nvme.so 00:09:19.034 CC module/event/subsystems/iobuf/iobuf.o 00:09:19.034 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:19.034 CC module/event/subsystems/keyring/keyring.o 00:09:19.034 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:19.034 CC module/event/subsystems/scheduler/scheduler.o 00:09:19.034 CC module/event/subsystems/vmd/vmd.o 00:09:19.034 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:19.034 CC module/event/subsystems/sock/sock.o 00:09:19.291 LIB libspdk_event_keyring.a 00:09:19.291 LIB libspdk_event_sock.a 00:09:19.291 LIB libspdk_event_scheduler.a 00:09:19.291 SO libspdk_event_keyring.so.1.0 00:09:19.291 SO libspdk_event_sock.so.5.0 00:09:19.291 LIB libspdk_event_iobuf.a 00:09:19.291 LIB libspdk_event_vhost_blk.a 00:09:19.291 SO libspdk_event_scheduler.so.4.0 00:09:19.291 LIB libspdk_event_vmd.a 00:09:19.291 SO libspdk_event_vhost_blk.so.3.0 00:09:19.291 SYMLINK libspdk_event_sock.so 00:09:19.291 SYMLINK libspdk_event_keyring.so 
00:09:19.291 SO libspdk_event_iobuf.so.3.0 00:09:19.291 SO libspdk_event_vmd.so.6.0 00:09:19.291 SYMLINK libspdk_event_scheduler.so 00:09:19.549 SYMLINK libspdk_event_vhost_blk.so 00:09:19.549 SYMLINK libspdk_event_iobuf.so 00:09:19.549 SYMLINK libspdk_event_vmd.so 00:09:19.807 CC module/event/subsystems/accel/accel.o 00:09:20.065 LIB libspdk_event_accel.a 00:09:20.065 SO libspdk_event_accel.so.6.0 00:09:20.065 SYMLINK libspdk_event_accel.so 00:09:20.633 CC module/event/subsystems/bdev/bdev.o 00:09:20.633 LIB libspdk_event_bdev.a 00:09:20.892 SO libspdk_event_bdev.so.6.0 00:09:20.892 SYMLINK libspdk_event_bdev.so 00:09:21.150 CC module/event/subsystems/scsi/scsi.o 00:09:21.150 CC module/event/subsystems/ublk/ublk.o 00:09:21.150 CC module/event/subsystems/nbd/nbd.o 00:09:21.150 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:21.150 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:21.407 LIB libspdk_event_ublk.a 00:09:21.407 LIB libspdk_event_nbd.a 00:09:21.407 SO libspdk_event_ublk.so.3.0 00:09:21.407 LIB libspdk_event_scsi.a 00:09:21.407 SO libspdk_event_nbd.so.6.0 00:09:21.407 SO libspdk_event_scsi.so.6.0 00:09:21.407 SYMLINK libspdk_event_ublk.so 00:09:21.407 SYMLINK libspdk_event_nbd.so 00:09:21.407 LIB libspdk_event_nvmf.a 00:09:21.407 SYMLINK libspdk_event_scsi.so 00:09:21.407 SO libspdk_event_nvmf.so.6.0 00:09:21.664 SYMLINK libspdk_event_nvmf.so 00:09:21.922 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:21.922 CC module/event/subsystems/iscsi/iscsi.o 00:09:21.922 LIB libspdk_event_vhost_scsi.a 00:09:22.180 LIB libspdk_event_iscsi.a 00:09:22.180 SO libspdk_event_vhost_scsi.so.3.0 00:09:22.180 SO libspdk_event_iscsi.so.6.0 00:09:22.180 SYMLINK libspdk_event_vhost_scsi.so 00:09:22.180 SYMLINK libspdk_event_iscsi.so 00:09:22.489 SO libspdk.so.6.0 00:09:22.489 SYMLINK libspdk.so 00:09:22.762 CC app/spdk_lspci/spdk_lspci.o 00:09:22.762 CC app/trace_record/trace_record.o 00:09:22.762 CC app/spdk_nvme_perf/perf.o 00:09:22.762 CXX app/trace/trace.o 00:09:22.762 CC app/spdk_nvme_identify/identify.o 00:09:22.762 CC app/nvmf_tgt/nvmf_main.o 00:09:22.762 CC app/iscsi_tgt/iscsi_tgt.o 00:09:23.021 CC app/spdk_tgt/spdk_tgt.o 00:09:23.021 CC examples/accel/perf/accel_perf.o 00:09:23.021 CC test/accel/dif/dif.o 00:09:23.021 LINK spdk_lspci 00:09:23.021 LINK nvmf_tgt 00:09:23.021 LINK spdk_trace_record 00:09:23.279 LINK iscsi_tgt 00:09:23.279 LINK spdk_tgt 00:09:23.279 LINK spdk_trace 00:09:23.537 LINK dif 00:09:23.537 LINK accel_perf 00:09:23.795 CC app/spdk_nvme_discover/discovery_aer.o 00:09:23.795 LINK spdk_nvme_identify 00:09:23.795 CC app/spdk_top/spdk_top.o 00:09:23.795 CC test/bdev/bdevio/bdevio.o 00:09:23.795 CC test/app/bdev_svc/bdev_svc.o 00:09:23.795 LINK spdk_nvme_perf 00:09:24.053 TEST_HEADER include/spdk/accel.h 00:09:24.053 TEST_HEADER include/spdk/accel_module.h 00:09:24.053 TEST_HEADER include/spdk/assert.h 00:09:24.053 TEST_HEADER include/spdk/barrier.h 00:09:24.053 TEST_HEADER include/spdk/base64.h 00:09:24.053 TEST_HEADER include/spdk/bdev.h 00:09:24.053 TEST_HEADER include/spdk/bdev_module.h 00:09:24.053 CC test/blobfs/mkfs/mkfs.o 00:09:24.053 TEST_HEADER include/spdk/bdev_zone.h 00:09:24.053 TEST_HEADER include/spdk/bit_array.h 00:09:24.053 TEST_HEADER include/spdk/bit_pool.h 00:09:24.053 TEST_HEADER include/spdk/blob_bdev.h 00:09:24.053 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:24.053 TEST_HEADER include/spdk/blobfs.h 00:09:24.053 TEST_HEADER include/spdk/blob.h 00:09:24.053 TEST_HEADER include/spdk/conf.h 00:09:24.053 TEST_HEADER include/spdk/config.h 
00:09:24.053 TEST_HEADER include/spdk/cpuset.h 00:09:24.053 TEST_HEADER include/spdk/crc16.h 00:09:24.053 TEST_HEADER include/spdk/crc32.h 00:09:24.053 LINK spdk_nvme_discover 00:09:24.053 TEST_HEADER include/spdk/crc64.h 00:09:24.053 TEST_HEADER include/spdk/dif.h 00:09:24.053 TEST_HEADER include/spdk/dma.h 00:09:24.053 TEST_HEADER include/spdk/endian.h 00:09:24.053 TEST_HEADER include/spdk/env_dpdk.h 00:09:24.053 TEST_HEADER include/spdk/env.h 00:09:24.053 TEST_HEADER include/spdk/event.h 00:09:24.053 TEST_HEADER include/spdk/fd_group.h 00:09:24.053 TEST_HEADER include/spdk/fd.h 00:09:24.053 TEST_HEADER include/spdk/file.h 00:09:24.053 TEST_HEADER include/spdk/ftl.h 00:09:24.053 TEST_HEADER include/spdk/gpt_spec.h 00:09:24.053 LINK bdev_svc 00:09:24.053 TEST_HEADER include/spdk/hexlify.h 00:09:24.053 TEST_HEADER include/spdk/histogram_data.h 00:09:24.053 TEST_HEADER include/spdk/idxd.h 00:09:24.053 TEST_HEADER include/spdk/idxd_spec.h 00:09:24.053 TEST_HEADER include/spdk/init.h 00:09:24.053 TEST_HEADER include/spdk/ioat.h 00:09:24.053 TEST_HEADER include/spdk/ioat_spec.h 00:09:24.053 TEST_HEADER include/spdk/iscsi_spec.h 00:09:24.053 TEST_HEADER include/spdk/json.h 00:09:24.053 TEST_HEADER include/spdk/jsonrpc.h 00:09:24.053 TEST_HEADER include/spdk/keyring.h 00:09:24.053 TEST_HEADER include/spdk/keyring_module.h 00:09:24.053 TEST_HEADER include/spdk/likely.h 00:09:24.053 TEST_HEADER include/spdk/log.h 00:09:24.053 TEST_HEADER include/spdk/lvol.h 00:09:24.053 TEST_HEADER include/spdk/memory.h 00:09:24.053 TEST_HEADER include/spdk/mmio.h 00:09:24.053 TEST_HEADER include/spdk/nbd.h 00:09:24.053 TEST_HEADER include/spdk/notify.h 00:09:24.053 TEST_HEADER include/spdk/nvme.h 00:09:24.053 TEST_HEADER include/spdk/nvme_intel.h 00:09:24.312 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:24.312 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:24.312 TEST_HEADER include/spdk/nvme_spec.h 00:09:24.312 TEST_HEADER include/spdk/nvme_zns.h 00:09:24.312 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:24.312 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:24.312 TEST_HEADER include/spdk/nvmf.h 00:09:24.312 TEST_HEADER include/spdk/nvmf_spec.h 00:09:24.312 TEST_HEADER include/spdk/nvmf_transport.h 00:09:24.312 TEST_HEADER include/spdk/opal.h 00:09:24.312 TEST_HEADER include/spdk/opal_spec.h 00:09:24.312 TEST_HEADER include/spdk/pci_ids.h 00:09:24.312 TEST_HEADER include/spdk/pipe.h 00:09:24.312 TEST_HEADER include/spdk/queue.h 00:09:24.312 TEST_HEADER include/spdk/reduce.h 00:09:24.312 TEST_HEADER include/spdk/rpc.h 00:09:24.312 TEST_HEADER include/spdk/scheduler.h 00:09:24.312 TEST_HEADER include/spdk/scsi.h 00:09:24.312 TEST_HEADER include/spdk/scsi_spec.h 00:09:24.312 TEST_HEADER include/spdk/sock.h 00:09:24.312 TEST_HEADER include/spdk/stdinc.h 00:09:24.312 TEST_HEADER include/spdk/string.h 00:09:24.312 TEST_HEADER include/spdk/thread.h 00:09:24.312 CC test/app/histogram_perf/histogram_perf.o 00:09:24.312 TEST_HEADER include/spdk/trace.h 00:09:24.312 LINK mkfs 00:09:24.312 TEST_HEADER include/spdk/trace_parser.h 00:09:24.312 TEST_HEADER include/spdk/tree.h 00:09:24.312 TEST_HEADER include/spdk/ublk.h 00:09:24.312 TEST_HEADER include/spdk/util.h 00:09:24.312 TEST_HEADER include/spdk/uuid.h 00:09:24.312 TEST_HEADER include/spdk/version.h 00:09:24.312 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:24.312 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:24.312 TEST_HEADER include/spdk/vhost.h 00:09:24.312 TEST_HEADER include/spdk/vmd.h 00:09:24.312 TEST_HEADER include/spdk/xor.h 00:09:24.312 TEST_HEADER 
include/spdk/zipf.h 00:09:24.312 CXX test/cpp_headers/accel.o 00:09:24.312 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:24.312 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:24.312 CC examples/bdev/hello_world/hello_bdev.o 00:09:24.569 LINK bdevio 00:09:24.569 LINK histogram_perf 00:09:24.569 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:24.569 CXX test/cpp_headers/accel_module.o 00:09:24.827 LINK hello_bdev 00:09:24.827 CC examples/blob/hello_world/hello_blob.o 00:09:24.827 CC examples/bdev/bdevperf/bdevperf.o 00:09:24.827 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:25.086 CXX test/cpp_headers/assert.o 00:09:25.086 LINK nvme_fuzz 00:09:25.086 LINK spdk_top 00:09:25.086 CC examples/blob/cli/blobcli.o 00:09:25.086 CC test/app/jsoncat/jsoncat.o 00:09:25.086 CXX test/cpp_headers/barrier.o 00:09:25.086 LINK hello_blob 00:09:25.344 CXX test/cpp_headers/base64.o 00:09:25.344 LINK jsoncat 00:09:25.602 LINK vhost_fuzz 00:09:25.602 CC app/vhost/vhost.o 00:09:25.602 CC examples/ioat/perf/perf.o 00:09:25.602 CC examples/nvme/hello_world/hello_world.o 00:09:25.602 CXX test/cpp_headers/bdev.o 00:09:25.602 CC examples/nvme/reconnect/reconnect.o 00:09:25.859 LINK vhost 00:09:25.859 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:25.859 LINK ioat_perf 00:09:25.859 CXX test/cpp_headers/bdev_module.o 00:09:25.859 LINK hello_world 00:09:25.859 LINK blobcli 00:09:26.116 CC examples/nvme/arbitration/arbitration.o 00:09:26.116 LINK bdevperf 00:09:26.116 CXX test/cpp_headers/bdev_zone.o 00:09:26.373 CC examples/ioat/verify/verify.o 00:09:26.373 LINK reconnect 00:09:26.373 LINK iscsi_fuzz 00:09:26.373 CXX test/cpp_headers/bit_array.o 00:09:26.373 CC app/spdk_dd/spdk_dd.o 00:09:26.373 LINK arbitration 00:09:26.373 CXX test/cpp_headers/bit_pool.o 00:09:26.630 CC examples/sock/hello_world/hello_sock.o 00:09:26.630 LINK verify 00:09:26.630 CC examples/vmd/lsvmd/lsvmd.o 00:09:26.630 LINK nvme_manage 00:09:26.630 CXX test/cpp_headers/blob_bdev.o 00:09:26.630 CXX test/cpp_headers/blobfs_bdev.o 00:09:26.888 CC app/fio/nvme/fio_plugin.o 00:09:26.888 LINK lsvmd 00:09:26.888 LINK hello_sock 00:09:26.888 CXX test/cpp_headers/blobfs.o 00:09:26.888 CC examples/nvmf/nvmf/nvmf.o 00:09:26.888 LINK spdk_dd 00:09:27.146 CC test/app/stub/stub.o 00:09:27.146 CC examples/nvme/hotplug/hotplug.o 00:09:27.146 CXX test/cpp_headers/blob.o 00:09:27.146 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:27.146 CXX test/cpp_headers/conf.o 00:09:27.146 CC examples/nvme/abort/abort.o 00:09:27.146 CC examples/vmd/led/led.o 00:09:27.404 LINK nvmf 00:09:27.404 CXX test/cpp_headers/config.o 00:09:27.404 LINK stub 00:09:27.404 LINK hotplug 00:09:27.404 LINK cmb_copy 00:09:27.404 CXX test/cpp_headers/cpuset.o 00:09:27.404 LINK led 00:09:27.404 LINK spdk_nvme 00:09:27.404 CC app/fio/bdev/fio_plugin.o 00:09:27.404 CXX test/cpp_headers/crc16.o 00:09:27.668 CXX test/cpp_headers/crc32.o 00:09:27.668 CXX test/cpp_headers/crc64.o 00:09:27.668 LINK abort 00:09:27.668 CXX test/cpp_headers/dif.o 00:09:27.956 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:27.956 CC examples/util/zipf/zipf.o 00:09:27.956 CXX test/cpp_headers/dma.o 00:09:27.956 CXX test/cpp_headers/endian.o 00:09:27.956 CC examples/idxd/perf/perf.o 00:09:27.956 CC examples/thread/thread/thread_ex.o 00:09:27.956 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:27.956 CC test/dma/test_dma/test_dma.o 00:09:27.956 LINK zipf 00:09:28.213 LINK spdk_bdev 00:09:28.213 CXX test/cpp_headers/env_dpdk.o 00:09:28.213 LINK pmr_persistence 00:09:28.213 LINK interrupt_tgt 00:09:28.213 LINK thread 
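The CXX test/cpp_headers/*.o lines interleaved through this part of the build appear to compile one small C++ translation unit per public SPDK header, verifying that each header is self-contained and C++-clean. A rough stand-alone equivalent of one such check is below; the header choice and compiler flags are illustrative assumptions.
    # Quick header self-containedness check, approximating what the
    # test/cpp_headers objects above verify for every public header.
    cd /home/vagrant/spdk_repo/spdk
    echo '#include <spdk/nvme.h>' | g++ -x c++ -std=c++11 -Iinclude -fsyntax-only -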
00:09:28.213 LINK idxd_perf 00:09:28.213 CXX test/cpp_headers/env.o 00:09:28.471 CXX test/cpp_headers/event.o 00:09:28.471 CC test/env/vtophys/vtophys.o 00:09:28.471 CXX test/cpp_headers/fd_group.o 00:09:28.471 CXX test/cpp_headers/fd.o 00:09:28.471 LINK test_dma 00:09:28.471 CC test/env/mem_callbacks/mem_callbacks.o 00:09:28.728 CXX test/cpp_headers/file.o 00:09:28.728 LINK vtophys 00:09:28.728 CXX test/cpp_headers/ftl.o 00:09:28.728 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:28.728 CXX test/cpp_headers/gpt_spec.o 00:09:28.728 CC test/env/memory/memory_ut.o 00:09:28.728 CC test/env/pci/pci_ut.o 00:09:28.728 CXX test/cpp_headers/hexlify.o 00:09:28.986 LINK env_dpdk_post_init 00:09:28.986 CC test/rpc_client/rpc_client_test.o 00:09:28.986 CXX test/cpp_headers/histogram_data.o 00:09:28.986 CC test/event/event_perf/event_perf.o 00:09:29.244 LINK pci_ut 00:09:29.244 CC test/nvme/aer/aer.o 00:09:29.244 LINK mem_callbacks 00:09:29.244 LINK rpc_client_test 00:09:29.244 CXX test/cpp_headers/idxd.o 00:09:29.244 LINK event_perf 00:09:29.501 CXX test/cpp_headers/idxd_spec.o 00:09:29.501 CC test/lvol/esnap/esnap.o 00:09:29.501 CC test/event/reactor/reactor.o 00:09:29.758 CXX test/cpp_headers/init.o 00:09:29.758 LINK aer 00:09:29.758 LINK reactor 00:09:29.758 CC test/event/reactor_perf/reactor_perf.o 00:09:29.758 CC test/event/app_repeat/app_repeat.o 00:09:29.758 CC test/nvme/reset/reset.o 00:09:29.758 CC test/thread/poller_perf/poller_perf.o 00:09:29.758 LINK memory_ut 00:09:30.016 LINK reactor_perf 00:09:30.016 LINK app_repeat 00:09:30.016 CXX test/cpp_headers/ioat.o 00:09:30.016 LINK poller_perf 00:09:30.273 CXX test/cpp_headers/ioat_spec.o 00:09:30.273 CC test/nvme/sgl/sgl.o 00:09:30.273 CC test/event/scheduler/scheduler.o 00:09:30.273 LINK reset 00:09:30.563 CC test/nvme/e2edp/nvme_dp.o 00:09:30.563 CC test/nvme/overhead/overhead.o 00:09:30.563 CXX test/cpp_headers/iscsi_spec.o 00:09:30.563 CC test/nvme/startup/startup.o 00:09:30.563 CC test/nvme/err_injection/err_injection.o 00:09:30.563 CXX test/cpp_headers/json.o 00:09:30.563 LINK sgl 00:09:30.842 LINK scheduler 00:09:30.842 LINK startup 00:09:30.842 LINK err_injection 00:09:30.842 CXX test/cpp_headers/jsonrpc.o 00:09:30.842 LINK overhead 00:09:30.842 CXX test/cpp_headers/keyring.o 00:09:30.842 CC test/nvme/reserve/reserve.o 00:09:30.842 LINK nvme_dp 00:09:31.100 CXX test/cpp_headers/keyring_module.o 00:09:31.357 CC test/nvme/simple_copy/simple_copy.o 00:09:31.357 LINK reserve 00:09:31.357 CC test/nvme/boot_partition/boot_partition.o 00:09:31.357 CC test/nvme/connect_stress/connect_stress.o 00:09:31.357 CC test/nvme/compliance/nvme_compliance.o 00:09:31.357 CC test/nvme/fused_ordering/fused_ordering.o 00:09:31.357 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:31.616 CXX test/cpp_headers/likely.o 00:09:31.616 LINK boot_partition 00:09:31.616 LINK simple_copy 00:09:31.874 LINK doorbell_aers 00:09:31.874 CC test/nvme/fdp/fdp.o 00:09:31.874 LINK nvme_compliance 00:09:31.874 LINK fused_ordering 00:09:31.874 CXX test/cpp_headers/log.o 00:09:31.874 CXX test/cpp_headers/lvol.o 00:09:31.874 LINK connect_stress 00:09:32.132 CC test/nvme/cuse/cuse.o 00:09:32.132 CXX test/cpp_headers/memory.o 00:09:32.132 CXX test/cpp_headers/mmio.o 00:09:32.132 CXX test/cpp_headers/nbd.o 00:09:32.132 CXX test/cpp_headers/notify.o 00:09:32.132 CXX test/cpp_headers/nvme.o 00:09:32.132 CXX test/cpp_headers/nvme_intel.o 00:09:32.391 CXX test/cpp_headers/nvme_ocssd.o 00:09:32.391 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:32.391 LINK fdp 00:09:32.391 CXX 
test/cpp_headers/nvme_spec.o 00:09:32.391 CXX test/cpp_headers/nvme_zns.o 00:09:32.655 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:32.655 CXX test/cpp_headers/nvmf_cmd.o 00:09:32.655 CXX test/cpp_headers/nvmf.o 00:09:32.655 CXX test/cpp_headers/nvmf_spec.o 00:09:32.655 CXX test/cpp_headers/nvmf_transport.o 00:09:32.655 CXX test/cpp_headers/opal.o 00:09:32.655 CXX test/cpp_headers/opal_spec.o 00:09:32.914 CXX test/cpp_headers/pci_ids.o 00:09:32.914 CXX test/cpp_headers/pipe.o 00:09:32.914 CXX test/cpp_headers/queue.o 00:09:32.914 CXX test/cpp_headers/reduce.o 00:09:33.196 CXX test/cpp_headers/rpc.o 00:09:33.196 CXX test/cpp_headers/scheduler.o 00:09:33.196 CXX test/cpp_headers/scsi.o 00:09:33.196 CXX test/cpp_headers/scsi_spec.o 00:09:33.196 CXX test/cpp_headers/sock.o 00:09:33.196 CXX test/cpp_headers/stdinc.o 00:09:33.454 LINK cuse 00:09:33.454 CXX test/cpp_headers/string.o 00:09:33.454 CXX test/cpp_headers/thread.o 00:09:33.454 CXX test/cpp_headers/trace.o 00:09:33.712 CXX test/cpp_headers/trace_parser.o 00:09:33.712 CXX test/cpp_headers/tree.o 00:09:33.712 CXX test/cpp_headers/ublk.o 00:09:33.712 CXX test/cpp_headers/util.o 00:09:33.712 CXX test/cpp_headers/uuid.o 00:09:33.712 CXX test/cpp_headers/version.o 00:09:33.712 CXX test/cpp_headers/vfio_user_pci.o 00:09:34.013 CXX test/cpp_headers/vfio_user_spec.o 00:09:34.013 CXX test/cpp_headers/vhost.o 00:09:34.013 CXX test/cpp_headers/vmd.o 00:09:34.013 CXX test/cpp_headers/xor.o 00:09:34.013 CXX test/cpp_headers/zipf.o 00:09:35.955 LINK esnap 00:09:37.367 00:09:37.367 real 1m25.316s 00:09:37.367 user 8m5.646s 00:09:37.367 sys 2m31.596s 00:09:37.367 09:50:14 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:09:37.367 09:50:14 make -- common/autotest_common.sh@10 -- $ set +x 00:09:37.367 ************************************ 00:09:37.367 END TEST make 00:09:37.367 ************************************ 00:09:37.367 09:50:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:37.367 09:50:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:37.367 09:50:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:37.367 09:50:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:37.367 09:50:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:09:37.367 09:50:14 -- pm/common@44 -- $ pid=5065 00:09:37.367 09:50:14 -- pm/common@50 -- $ kill -TERM 5065 00:09:37.367 09:50:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:37.367 09:50:14 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:09:37.367 09:50:14 -- pm/common@44 -- $ pid=5067 00:09:37.367 09:50:14 -- pm/common@50 -- $ kill -TERM 5067 00:09:37.626 09:50:14 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:37.626 09:50:14 -- nvmf/common.sh@7 -- # uname -s 00:09:37.626 09:50:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.626 09:50:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.626 09:50:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.626 09:50:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.626 09:50:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.626 09:50:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.626 09:50:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.626 09:50:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.626 09:50:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
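The nvmf/common.sh trace here and in the lines that follow establishes the NVMe-oF test defaults: a TCP target at 127.0.0.1 on port 4420, subsystem nqn.2016-06.io.spdk:testnqn, a host NQN/ID generated via nvme gen-hostnqn, and NVME_CONNECT='nvme connect'. A connect command assembled from those variables would look roughly like the sketch below; it is an illustration, not a line from this log.
    # Hedged example of how the sourced defaults are typically combined by the
    # nvmf functional tests (address, port, and NQN are the defaults traced here).
    nvme connect -t tcp -a 127.0.0.1 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"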
00:09:37.626 09:50:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.626 09:50:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:09:37.626 09:50:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:09:37.626 09:50:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.626 09:50:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.626 09:50:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.626 09:50:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.626 09:50:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.626 09:50:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.626 09:50:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.626 09:50:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.626 09:50:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.626 09:50:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.626 09:50:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.626 09:50:14 -- paths/export.sh@5 -- # export PATH 00:09:37.626 09:50:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.626 09:50:14 -- nvmf/common.sh@47 -- # : 0 00:09:37.626 09:50:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:37.626 09:50:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:37.626 09:50:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.626 09:50:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.626 09:50:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.626 09:50:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:37.626 09:50:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:37.626 09:50:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:37.626 09:50:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:37.627 09:50:14 -- spdk/autotest.sh@32 -- # uname -s 00:09:37.627 09:50:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:37.627 09:50:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:37.627 09:50:14 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:37.627 09:50:14 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:09:37.627 09:50:14 -- spdk/autotest.sh@40 -- # echo 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:09:37.627 09:50:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:37.627 09:50:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:37.627 09:50:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:37.627 09:50:14 -- spdk/autotest.sh@48 -- # udevadm_pid=54046 00:09:37.627 09:50:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:37.627 09:50:14 -- pm/common@17 -- # local monitor 00:09:37.627 09:50:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:37.627 09:50:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:37.627 09:50:14 -- pm/common@21 -- # date +%s 00:09:37.627 09:50:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:37.627 09:50:14 -- pm/common@25 -- # sleep 1 00:09:37.627 09:50:14 -- pm/common@21 -- # date +%s 00:09:37.627 09:50:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715766614 00:09:37.627 09:50:14 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715766614 00:09:37.627 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715766614_collect-cpu-load.pm.log 00:09:37.627 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715766614_collect-vmstat.pm.log 00:09:38.559 09:50:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:38.559 09:50:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:38.559 09:50:15 -- common/autotest_common.sh@721 -- # xtrace_disable 00:09:38.559 09:50:15 -- common/autotest_common.sh@10 -- # set +x 00:09:38.559 09:50:15 -- spdk/autotest.sh@59 -- # create_test_list 00:09:38.559 09:50:15 -- common/autotest_common.sh@745 -- # xtrace_disable 00:09:38.559 09:50:15 -- common/autotest_common.sh@10 -- # set +x 00:09:38.817 09:50:15 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:09:38.817 09:50:15 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:09:38.817 09:50:15 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:09:38.817 09:50:15 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:09:38.817 09:50:15 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:09:38.817 09:50:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:38.817 09:50:15 -- common/autotest_common.sh@1452 -- # uname 00:09:38.817 09:50:15 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:09:38.817 09:50:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:38.817 09:50:15 -- common/autotest_common.sh@1472 -- # uname 00:09:38.817 09:50:15 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:09:38.817 09:50:15 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:09:38.817 09:50:15 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:09:38.817 09:50:15 -- spdk/autotest.sh@72 -- # hash lcov 00:09:38.817 09:50:15 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:09:38.817 09:50:15 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:09:38.817 --rc lcov_branch_coverage=1 00:09:38.818 --rc lcov_function_coverage=1 00:09:38.818 --rc genhtml_branch_coverage=1 00:09:38.818 --rc genhtml_function_coverage=1 00:09:38.818 --rc genhtml_legend=1 00:09:38.818 --rc geninfo_all_blocks=1 00:09:38.818 ' 
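Next, autotest.sh wires up coverage collection: having detected CC_TYPE=gcc and a usable lcov, it exports a shared set of --rc switches and immediately captures an initial, zero-count baseline (the lcov -c -i -t Baseline command that follows). The long run of geninfo "no functions found" warnings after that is consistent with the cpp_headers objects existing only to prove each public header compiles on its own, so their .gcno files contain no executable functions. Below is a sketch of the baseline-plus-merge flow using the same switches; the post-test capture and merge steps are the conventional lcov pattern, not commands copied from this log, and the Autotest test name is a placeholder.

```bash
# The switches exported in the log: branch + function coverage, keep blocks
# with no recorded hits, ignore sources outside the tree (--no-external).
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
           --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
           --rc genhtml_legend=1 --rc geninfo_all_blocks=1"
LCOV="lcov $LCOV_OPTS --no-external"

src=/home/vagrant/spdk_repo/spdk
out=$src/../output

# 1. Baseline before any test runs: -c capture, -i initial (zero counts), -t name.
$LCOV -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"

# 2. Conventional follow-up (not shown in this part of the log): capture the
#    real counters once the tests have run...
$LCOV -q -c -t Autotest -d "$src" -o "$out/cov_test.info"

# 3. ...and merge baseline + test data so files that never executed still
#    appear in the report with zero coverage.
$LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
```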
00:09:38.818 09:50:15 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:09:38.818 --rc lcov_branch_coverage=1 00:09:38.818 --rc lcov_function_coverage=1 00:09:38.818 --rc genhtml_branch_coverage=1 00:09:38.818 --rc genhtml_function_coverage=1 00:09:38.818 --rc genhtml_legend=1 00:09:38.818 --rc geninfo_all_blocks=1 00:09:38.818 ' 00:09:38.818 09:50:15 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:09:38.818 --rc lcov_branch_coverage=1 00:09:38.818 --rc lcov_function_coverage=1 00:09:38.818 --rc genhtml_branch_coverage=1 00:09:38.818 --rc genhtml_function_coverage=1 00:09:38.818 --rc genhtml_legend=1 00:09:38.818 --rc geninfo_all_blocks=1 00:09:38.818 --no-external' 00:09:38.818 09:50:15 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:09:38.818 --rc lcov_branch_coverage=1 00:09:38.818 --rc lcov_function_coverage=1 00:09:38.818 --rc genhtml_branch_coverage=1 00:09:38.818 --rc genhtml_function_coverage=1 00:09:38.818 --rc genhtml_legend=1 00:09:38.818 --rc geninfo_all_blocks=1 00:09:38.818 --no-external' 00:09:38.818 09:50:15 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:09:38.818 lcov: LCOV version 1.14 00:09:38.818 09:50:16 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:09:48.812 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:09:48.812 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:09:48.812 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:09:48.812 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:09:48.812 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:09:48.812 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:09:55.489 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:55.489 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:10:10.421 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:10:10.421 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no 
functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 
00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:10:10.422 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:10:10.422 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:10:10.422 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:10:10.423 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no 
functions found 00:10:10.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:10:14.630 09:50:51 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:10:14.630 09:50:51 -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:14.630 09:50:51 -- common/autotest_common.sh@10 -- # set +x 00:10:14.630 09:50:51 -- spdk/autotest.sh@91 -- # rm -f 00:10:14.630 09:50:51 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:14.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:14.630 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:14.630 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:14.630 09:50:51 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:10:14.630 09:50:51 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:10:14.630 09:50:51 -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:10:14.630 09:50:51 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:10:14.630 09:50:51 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:14.630 09:50:51 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:10:14.630 09:50:51 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:10:14.630 09:50:51 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:14.630 09:50:51 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:14.630 09:50:51 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:14.630 09:50:51 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:10:14.630 09:50:51 -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:10:14.630 09:50:51 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:14.630 09:50:51 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:14.630 09:50:51 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:14.630 09:50:51 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n2 00:10:14.630 09:50:51 -- common/autotest_common.sh@1659 -- # local device=nvme1n2 00:10:14.630 09:50:51 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:14.630 09:50:51 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:14.630 09:50:51 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:14.630 09:50:51 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n3 00:10:14.630 09:50:51 -- common/autotest_common.sh@1659 -- # local device=nvme1n3 00:10:14.630 09:50:51 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:14.630 09:50:51 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:14.630 09:50:51 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:10:14.630 09:50:51 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:14.630 09:50:51 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:14.630 09:50:51 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:10:14.630 09:50:51 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:10:14.630 09:50:51 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:14.630 No valid GPT data, bailing 00:10:14.630 09:50:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:14.888 09:50:52 -- scripts/common.sh@391 -- # pt= 00:10:14.888 09:50:52 -- scripts/common.sh@392 -- # return 1 00:10:14.888 
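The pre-cleanup pass here is deciding whether each NVMe namespace may be scrubbed: zoned namespaces (queue/zoned other than "none") are excluded first, and a device is zero-filled only when neither SPDK's spdk-gpt.py parser nor blkid reports a partition table, which is exactly the "No valid GPT data, bailing" / PTTYPE / return 1 sequence just logged for /dev/nvme0n1; the 1 MiB dd scrub that follows is the action this guard permits. A condensed sketch of that guard is below; is_safe_to_wipe is a made-up name standing in for the is_block_zoned / block_in_use helpers traced in the log.

```bash
# Condensed, hypothetical version of the guard the log executes per namespace:
# skip zoned devices, then wipe the first MiB only if no partition table exists.
is_safe_to_wipe() {
    local dev=$1 name=${1##*/}
    # Zoned namespaces report host-aware/host-managed here instead of "none".
    [[ $(cat "/sys/block/$name/queue/zoned" 2>/dev/null) == none ]] || return 1
    # blkid prints the partition-table type (gpt, dos, ...) when one is present;
    # empty output means the namespace carries no recognizable label.
    [[ -z $(blkid -s PTTYPE -o value "$dev") ]]
}

for dev in /dev/nvme*n*; do
    [[ $dev == *p* ]] && continue               # whole namespaces only, no partitions
    if is_safe_to_wipe "$dev"; then
        dd if=/dev/zero of="$dev" bs=1M count=1 # the 1 MiB scrub the log performs next
    fi
done
```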
09:50:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:14.888 1+0 records in 00:10:14.888 1+0 records out 00:10:14.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00583895 s, 180 MB/s 00:10:14.888 09:50:52 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:14.888 09:50:52 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:14.888 09:50:52 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:10:14.888 09:50:52 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:10:14.888 09:50:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:14.888 No valid GPT data, bailing 00:10:14.888 09:50:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:14.888 09:50:52 -- scripts/common.sh@391 -- # pt= 00:10:14.888 09:50:52 -- scripts/common.sh@392 -- # return 1 00:10:14.888 09:50:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:14.888 1+0 records in 00:10:14.888 1+0 records out 00:10:14.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00556 s, 189 MB/s 00:10:14.888 09:50:52 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:14.888 09:50:52 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:14.888 09:50:52 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:10:14.888 09:50:52 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:10:14.888 09:50:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:10:14.888 No valid GPT data, bailing 00:10:14.888 09:50:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:10:14.888 09:50:52 -- scripts/common.sh@391 -- # pt= 00:10:14.888 09:50:52 -- scripts/common.sh@392 -- # return 1 00:10:14.888 09:50:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:10:14.888 1+0 records in 00:10:14.888 1+0 records out 00:10:14.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00640884 s, 164 MB/s 00:10:14.888 09:50:52 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:14.888 09:50:52 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:14.888 09:50:52 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:10:14.888 09:50:52 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:10:14.888 09:50:52 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:10:14.888 No valid GPT data, bailing 00:10:15.146 09:50:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:10:15.146 09:50:52 -- scripts/common.sh@391 -- # pt= 00:10:15.146 09:50:52 -- scripts/common.sh@392 -- # return 1 00:10:15.146 09:50:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:10:15.146 1+0 records in 00:10:15.146 1+0 records out 00:10:15.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00541742 s, 194 MB/s 00:10:15.146 09:50:52 -- spdk/autotest.sh@118 -- # sync 00:10:15.146 09:50:52 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:10:15.146 09:50:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:15.146 09:50:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:16.517 09:50:53 -- spdk/autotest.sh@124 -- # uname -s 00:10:16.517 09:50:53 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:10:16.517 09:50:53 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:10:16.517 09:50:53 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 
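After the four namespaces are wiped and the sync completes, the remainder of the log is organized by run_test, the wrapper responsible for the starred START TEST / END TEST banners and the real/user/sys timing summaries printed for make, setup.sh, acl, denied and allowed. A minimal illustrative wrapper in the same spirit is sketched below; the names and banner width are guesses, not SPDK's exact implementation.

```bash
# Illustrative run_test-style wrapper: banner, timing, status propagation.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Usage matching the invocation in the log:
# run_test "setup.sh" /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
```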
00:10:16.517 09:50:53 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:16.517 09:50:53 -- common/autotest_common.sh@10 -- # set +x 00:10:16.517 ************************************ 00:10:16.517 START TEST setup.sh 00:10:16.517 ************************************ 00:10:16.517 09:50:53 setup.sh -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:10:16.517 * Looking for test storage... 00:10:16.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:16.517 09:50:53 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:10:16.517 09:50:53 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:10:16.517 09:50:53 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:10:16.517 09:50:53 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:16.517 09:50:53 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:16.517 09:50:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:16.517 ************************************ 00:10:16.517 START TEST acl 00:10:16.517 ************************************ 00:10:16.517 09:50:53 setup.sh.acl -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:10:16.774 * Looking for test storage... 00:10:16.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:16.774 09:50:53 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n2 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme1n2 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n3 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1659 -- # local 
device=nvme1n3 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:16.774 09:50:53 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:16.774 09:50:53 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:10:16.774 09:50:53 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:10:16.774 09:50:53 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:10:16.774 09:50:53 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:10:16.774 09:50:53 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:10:16.774 09:50:53 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:16.774 09:50:53 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:17.706 09:50:54 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:10:17.706 09:50:54 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:10:17.706 09:50:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:17.707 09:50:54 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:10:17.707 09:50:54 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:10:17.707 09:50:54 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:18.274 Hugepages 00:10:18.274 node hugesize free / total 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:18.274 00:10:18.274 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:10:18.274 09:50:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:18.533 09:50:55 setup.sh.acl -- 
setup/acl.sh@24 -- # (( 2 > 0 )) 00:10:18.533 09:50:55 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:10:18.533 09:50:55 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:18.533 09:50:55 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:18.533 09:50:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:10:18.533 ************************************ 00:10:18.533 START TEST denied 00:10:18.533 ************************************ 00:10:18.533 09:50:55 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:10:18.533 09:50:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:10:18.533 09:50:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:10:18.533 09:50:55 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:10:18.533 09:50:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:10:18.533 09:50:55 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:19.465 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:10:19.465 09:50:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:10:19.466 09:50:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:10:19.466 09:50:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:10:19.466 09:50:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:10:19.466 09:50:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:10:19.466 09:50:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:19.466 09:50:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:19.466 09:50:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:10:19.466 09:50:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:19.466 09:50:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:20.398 00:10:20.398 real 0m1.617s 00:10:20.398 user 0m0.553s 00:10:20.398 sys 0m1.015s 00:10:20.398 09:50:57 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:20.398 ************************************ 00:10:20.398 END TEST denied 00:10:20.398 ************************************ 00:10:20.398 09:50:57 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:10:20.398 09:50:57 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:10:20.399 09:50:57 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:20.399 09:50:57 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:20.399 09:50:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:10:20.399 ************************************ 00:10:20.399 START TEST allowed 00:10:20.399 ************************************ 00:10:20.399 09:50:57 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:10:20.399 09:50:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:10:20.399 09:50:57 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:10:20.399 09:50:57 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:10:20.399 09:50:57 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:10:20.399 09:50:57 setup.sh.acl.allowed -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:21.330 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:21.330 09:50:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:10:21.330 09:50:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:10:21.331 09:50:58 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:10:21.331 09:50:58 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:10:21.331 09:50:58 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:10:21.331 09:50:58 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:21.331 09:50:58 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:21.331 09:50:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:10:21.331 09:50:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:21.331 09:50:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:21.896 00:10:21.896 real 0m1.640s 00:10:21.896 user 0m0.703s 00:10:21.896 sys 0m0.938s 00:10:21.896 09:50:59 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:21.896 09:50:59 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:10:21.896 ************************************ 00:10:21.896 END TEST allowed 00:10:21.896 ************************************ 00:10:21.896 00:10:21.896 real 0m5.273s 00:10:21.896 user 0m2.162s 00:10:21.896 sys 0m3.093s 00:10:21.896 09:50:59 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:21.896 09:50:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:10:21.896 ************************************ 00:10:21.896 END TEST acl 00:10:21.896 ************************************ 00:10:21.896 09:50:59 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:10:21.896 09:50:59 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:21.896 09:50:59 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:21.896 09:50:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:21.896 ************************************ 00:10:21.896 START TEST hugepages 00:10:21.896 ************************************ 00:10:21.896 09:50:59 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:10:22.155 * Looking for test storage... 
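The hugepages suite begins by asking the kernel for the default huge page size: the traced get_meminfo call that follows slurps /proc/meminfo (or a node-specific meminfo file when a node is supplied) into an array, strips any "Node N" prefix, and walks field by field until it reaches Hugepagesize, which is why the trace repeats one comparison per meminfo key. A simplified sketch of the same lookup, under a hypothetical function name:

```bash
# Simplified stand-in for the traced get_meminfo loop: fetch one field (in kB)
# from /proc/meminfo, or from a node's meminfo when a NUMA node is given.
meminfo_field() {
    local field=$1 node=${2:-}
    local src=/proc/meminfo
    [[ -n $node ]] && src=/sys/devices/system/node/node$node/meminfo
    # Per-node lines carry a "Node N " prefix; strip it so the keys match.
    sed 's/^Node [0-9]* //' "$src" | awk -v f="$field:" '$1 == f {print $2}'
}

meminfo_field Hugepagesize        # e.g. 2048
meminfo_field HugePages_Free 0    # per-node variant, if node0 exists
```

This returns the same kilobyte value the traced loop eventually picks up once it reaches the Hugepagesize line, without emitting one xtrace comparison per meminfo key.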
00:10:22.155 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:22.155 09:50:59 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 5922308 kB' 'MemAvailable: 7432508 kB' 'Buffers: 2436 kB' 'Cached: 1715236 kB' 'SwapCached: 0 kB' 'Active: 473608 kB' 'Inactive: 1356120 kB' 'Active(anon): 111860 kB' 'Inactive(anon): 10680 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345440 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 112232 kB' 'Mapped: 49112 kB' 'Shmem: 10484 kB' 'KReclaimable: 79604 kB' 'Slab: 152148 kB' 'SReclaimable: 79604 kB' 'SUnreclaim: 72544 kB' 'KernelStack: 4700 kB' 'PageTables: 3368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12407572 kB' 'Committed_AS: 342964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 
09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.156 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read 
-r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:10:22.157 09:50:59 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:10:22.157 09:50:59 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:22.157 09:50:59 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:22.157 09:50:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:22.157 ************************************ 00:10:22.157 START TEST default_setup 00:10:22.157 ************************************ 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- 
# node_ids=('0') 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:10:22.157 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:10:22.158 09:50:59 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:10:22.158 09:50:59 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:10:22.158 09:50:59 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:23.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:23.094 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:23.094 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.094 09:51:00 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8033128 kB' 'MemAvailable: 9543148 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484312 kB' 'Inactive: 1356124 kB' 'Active(anon): 122564 kB' 'Inactive(anon): 10668 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122840 kB' 'Mapped: 48876 kB' 'Shmem: 10468 kB' 'KReclaimable: 79212 kB' 'Slab: 151816 kB' 'SReclaimable: 79212 kB' 'SUnreclaim: 72604 kB' 'KernelStack: 4704 kB' 'PageTables: 3380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.094 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8033344 kB' 'MemAvailable: 9543364 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 483948 kB' 'Inactive: 1356116 kB' 'Active(anon): 122200 kB' 'Inactive(anon): 10660 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122452 kB' 'Mapped: 48764 kB' 'Shmem: 10468 kB' 'KReclaimable: 79212 kB' 'Slab: 151816 kB' 'SReclaimable: 79212 kB' 'SUnreclaim: 72604 kB' 'KernelStack: 4720 kB' 'PageTables: 3400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 
'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.095 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.096 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 
09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.097 
09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8033344 kB' 'MemAvailable: 9543364 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 483948 kB' 'Inactive: 1356116 kB' 'Active(anon): 122200 kB' 'Inactive(anon): 10660 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122452 kB' 'Mapped: 48764 kB' 'Shmem: 10468 kB' 'KReclaimable: 79212 kB' 'Slab: 151816 kB' 'SReclaimable: 79212 kB' 'SUnreclaim: 72604 kB' 'KernelStack: 4720 kB' 'PageTables: 3400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:23.097 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.098 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.099 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:23.360 nr_hugepages=1024 00:10:23.360 resv_hugepages=0 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:23.360 surplus_hugepages=0 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:23.360 anon_hugepages=0 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:23.360 09:51:00 
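
Note: the xtrace above shows the harness resolving HugePages_Rsvd: the captured /proc/meminfo snapshot is split on IFS=': ', every non-matching key falls through to continue, and the matching key's value (0 here) is echoed back and stored as surp/resv. A minimal stand-alone sketch of that lookup pattern follows; the helper name get_meminfo_field and its exact flow are a simplification for illustration, not the real setup/common.sh code.

#!/usr/bin/env bash
# Sketch (assumption): fetch one field from a meminfo-style file, mirroring the
# IFS=': ' / read -r loop visible in the trace above. Not the actual helper.
get_meminfo_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a node id is given and its sysfs meminfo exists, read that instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <N> "; strip it before splitting.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_field HugePages_Rsvd      # prints 0 in a run like the one above
get_meminfo_field HugePages_Total 0   # per-node variant; 1024 on node0 here
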
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8033344 kB' 'MemAvailable: 9543364 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484016 kB' 'Inactive: 1356116 kB' 'Active(anon): 122268 kB' 'Inactive(anon): 10660 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122512 kB' 'Mapped: 48764 kB' 'Shmem: 10468 kB' 'KReclaimable: 79212 kB' 'Slab: 151816 kB' 'SReclaimable: 79212 kB' 'SUnreclaim: 72604 kB' 'KernelStack: 4752 kB' 'PageTables: 3464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53280 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.360 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.361 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.362 09:51:00 
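
Note: the get_nodes call traced just above (hugepages.sh@112, @27-@33) walks the sysfs node directories and records one hugepage count per NUMA node; on this single-node VM that yields no_nodes=1 with 1024 pages recorded for node0. Below is a rough equivalent of that enumeration, reading the counts straight from sysfs rather than reusing the script's globals; the associative-array layout is an illustrative choice, not the script's.

# Sketch (assumption): enumerate NUMA nodes and capture their 2 MiB hugepage
# counts, similar in spirit to the get_nodes step traced above.
shopt -s nullglob
declare -A node_hugepages
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    node_hugepages[$node]=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#node_hugepages[@]}"   # 1 in this run
echo "node0=${node_hugepages[0]:-0}"    # 1024 in this run
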
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8033344 kB' 'MemUsed: 4198900 kB' 'SwapCached: 0 kB' 'Active: 484220 kB' 'Inactive: 1356116 kB' 'Active(anon): 122472 kB' 'Inactive(anon): 10660 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1717676 kB' 'Mapped: 48764 kB' 'AnonPages: 122752 kB' 'Shmem: 10468 kB' 'KernelStack: 4800 kB' 'PageTables: 3576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79212 kB' 'Slab: 151816 kB' 'SReclaimable: 79212 kB' 'SUnreclaim: 72604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 
09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.362 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:23.363 node0=1024 expecting 1024 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:23.363 00:10:23.363 real 0m1.128s 00:10:23.363 user 0m0.492s 00:10:23.363 sys 0m0.605s 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:23.363 09:51:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:10:23.363 ************************************ 00:10:23.363 END TEST default_setup 00:10:23.363 ************************************ 00:10:23.363 09:51:00 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:10:23.363 09:51:00 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:23.363 09:51:00 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 
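default_setup finishes by comparing the per-node count it gathered against the expected value, which is why the log prints 'node0=1024 expecting 1024' just before the [[ 1024 == 1024 ]] check passes. As an illustration only (a standard sysfs path, not the script's own code), the same per-node counter can be read directly:

# Illustrative check mirroring "node0=1024 expecting 1024": read the
# 2 MiB hugepage count reserved on NUMA node 0 straight from sysfs.
expected=1024
node0=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
[[ $node0 == "$expected" ]] && echo "node0=$node0 expecting $expected"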
00:10:23.363 09:51:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:23.363 ************************************ 00:10:23.363 START TEST per_node_1G_alloc 00:10:23.363 ************************************ 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:23.363 09:51:00 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:23.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:23.621 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:23.622 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # 
verify_nr_hugepages 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.887 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.888 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9072572 kB' 'MemAvailable: 10582592 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484512 kB' 'Inactive: 1356100 kB' 'Active(anon): 122764 kB' 'Inactive(anon): 10644 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122924 kB' 'Mapped: 48804 kB' 'Shmem: 10468 kB' 'KReclaimable: 79212 kB' 'Slab: 151832 kB' 'SReclaimable: 79212 kB' 'SUnreclaim: 72620 kB' 'KernelStack: 4872 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:23.888 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.888 09:51:01 
[xtrace condensed: get_meminfo AnonHugePages (per_node_1G_alloc) scans the snapshot printed above key by key -- MemFree, MemAvailable, Buffers, Cached, SwapCached, Active/Inactive (total, anon and file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit -- taking the setup/common.sh@32 'continue' branch for each; the raw trace resumes below at Committed_AS and ends when AnonHugePages matches.]
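The reason verify_nr_hugepages samples AnonHugePages at all is the transparent-hugepage guard visible earlier in the trace: THP is reported as 'always [madvise] never', i.e. not globally disabled, so anonymous huge pages could in principle skew the accounting. A small sketch of that guard, assuming the usual sysfs and procfs locations (illustrative, not the script's exact logic):

# Illustrative: only sample AnonHugePages when transparent hugepages are
# not set to [never]; otherwise treat the anonymous contribution as 0.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon=0
fi
echo "anon=${anon:-0}"   # AnonHugePages is 0 kB in the snapshot above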
-- setup/common.sh@31 -- # IFS=': ' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var 
val 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9072572 kB' 'MemAvailable: 10582592 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484356 kB' 'Inactive: 1356092 kB' 'Active(anon): 122608 kB' 'Inactive(anon): 10636 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123056 kB' 'Mapped: 49064 kB' 'Shmem: 10468 kB' 'KReclaimable: 79212 kB' 'Slab: 151824 kB' 'SReclaimable: 79212 kB' 'SUnreclaim: 72612 kB' 'KernelStack: 4788 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.889 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.889 09:51:01 
[xtrace condensed: get_meminfo HugePages_Surp (per_node_1G_alloc) repeats the same per-key scan over the snapshot above, from Buffers through FilePmdMapped, taking the setup/common.sh@32 'continue' branch for every non-matching key; the raw trace resumes below at CmaTotal and ends when HugePages_Surp matches.]
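The snapshots printed by this test already make the allocation easy to sanity-check: with Hugepagesize at 2048 kB, the 512 pages in HugePages_Total account for 512 * 2048 kB = 1048576 kB, exactly the 1 GiB on node 0 that per_node_1G_alloc requested and that the Hugetlb line reports. A quick illustrative cross-check (standard /proc/meminfo keys, not part of the test script):

# Cross-check of the snapshot values shown above:
# 512 pages * 2048 kB/page = 1048576 kB = 1 GiB of hugetlb memory.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 512
size=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)       # 2048 (kB)
echo "$(( total * size )) kB reserved as hugepages"           # 1048576 kB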
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:23.891 09:51:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9072572 kB' 'MemAvailable: 10582592 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484416 kB' 'Inactive: 1356100 kB' 'Active(anon): 122668 kB' 'Inactive(anon): 10644 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122884 kB' 'Mapped: 49064 kB' 'Shmem: 10468 kB' 'KReclaimable: 79212 kB' 'Slab: 151820 kB' 'SReclaimable: 79212 kB' 'SUnreclaim: 72608 kB' 'KernelStack: 4804 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.891 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
[xtrace condensed: get_meminfo HugePages_Rsvd (per_node_1G_alloc) starts the same per-key scan again -- Buffers, Cached, SwapCached, Active/Inactive (total, anon and file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack -- taking the setup/common.sh@32 'continue' branch for each; the raw trace continues below in the same pattern until HugePages_Rsvd matches.]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.892 09:51:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.892 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 
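What is being traced here, line after line, is the field scan inside get_meminfo in setup/common.sh: each meminfo line is split on ': ', the key is compared against the one requested (HugePages_Rsvd in this pass), and non-matching fields are skipped with continue until the key matches, at which point the value is echoed and the function returns. A minimal sketch of that pattern follows; the function name is illustrative, not the SPDK source, and plain /proc/meminfo is assumed as the input.

# Minimal sketch (illustrative name), assuming plain /proc/meminfo as input.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-matching field is skipped, as traced above
        echo "$val"                        # matching field: print the value and stop
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo_sketch HugePages_Rsvd   # prints 0 on this run, which is why resv=0 appears just below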
09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:23.893 nr_hugepages=512 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:10:23.893 resv_hugepages=0 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:23.893 surplus_hugepages=0 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:23.893 anon_hugepages=0 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo 
anon_hugepages=0 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9072572 kB' 'MemAvailable: 10582592 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484480 kB' 'Inactive: 1356100 kB' 'Active(anon): 122732 kB' 'Inactive(anon): 10644 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122916 kB' 'Mapped: 49064 kB' 'Shmem: 10468 kB' 'KReclaimable: 79212 kB' 'Slab: 151820 kB' 'SReclaimable: 79212 kB' 'SUnreclaim: 72608 kB' 'KernelStack: 4752 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.893 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
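The values echoed a little earlier (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed the (( 512 == nr_hugepages + surp + resv )) check at hugepages.sh@107, which the scan traced here then re-verifies against HugePages_Total. A hedged restatement of that arithmetic with this run's numbers, for orientation only:

# Values taken from the echoes above; the identity is the one asserted at hugepages.sh@107.
requested=512      # the literal 512 in the (( )) check, i.e. the pages this test configured
nr_hugepages=512   # HugePages_Total reported back by the kernel
resv=0             # HugePages_Rsvd
surp=0             # HugePages_Surp
(( requested == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"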
00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.894 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9072320 kB' 'MemUsed: 3159924 kB' 'SwapCached: 0 kB' 'Active: 484436 kB' 'Inactive: 1356100 kB' 'Active(anon): 122688 kB' 'Inactive(anon): 10644 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345456 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1717676 kB' 'Mapped: 49064 kB' 'AnonPages: 122912 kB' 'Shmem: 10468 kB' 'KernelStack: 4752 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79212 kB' 'Slab: 151820 kB' 'SReclaimable: 79212 kB' 'SUnreclaim: 72608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.895 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
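The HugePages_Surp lookup traced in this stretch differs from the earlier ones only in its data source: because node=0 was passed, setup/common.sh@23-@24 switch mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and @29 strips the "Node 0 " prefix that per-node files put on every line. A reconstructed sketch of that selection, under the assumption that extglob is enabled (the +([0-9]) pattern requires it):

# Reconstructed sketch of the node-aware source selection traced at setup/common.sh@18-@29.
shopt -s extglob
node=0
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node 0 "; strip it
printf '%s\n' "${mem[@]}" | grep -F 'HugePages_Surp'   # -> "HugePages_Surp: 0" on this run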
00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:23.896 09:51:01 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:23.896 node0=512 expecting 512 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:10:23.896 00:10:23.896 real 0m0.674s 00:10:23.896 user 0m0.334s 00:10:23.896 sys 0m0.384s 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:23.896 09:51:01 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:23.897 ************************************ 00:10:23.897 END TEST per_node_1G_alloc 00:10:23.897 ************************************ 00:10:24.155 09:51:01 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:10:24.155 09:51:01 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:24.155 09:51:01 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:24.155 09:51:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:24.155 ************************************ 00:10:24.155 START TEST even_2G_alloc 00:10:24.155 ************************************ 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:24.155 09:51:01 
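(For readers following the trace: per_node_1G_alloc has just passed its "node0=512 expecting 512" check, and even_2G_alloc now turns the requested 2097152 kB into a hugepage count. With the 2048 kB Hugepagesize reported in the meminfo dumps below, that works out to 1024 pages on this single-node VM. A minimal sketch of that arithmetic only, not the setup/hugepages.sh code itself:)

    # Illustrative only: the page-count arithmetic implied by the trace above.
    size_kb=2097152                                  # argument passed to get_test_nr_hugepages
    hugepagesize_kb=2048                             # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 2097152 / 2048 = 1024
    echo "nr_hugepages=${nr_hugepages} on node0"     # matches nodes_test[0]=1024 in the trace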
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:24.155 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:24.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.416 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:24.416 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
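(The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at setup/hugepages.sh@96 checks whether transparent hugepages are fully disabled before looking at AnonHugePages; the matched string has the shape of the kernel's THP setting. A hedged sketch of that check follows; the sysfs path is an assumption, not something the trace prints:)

    # Assumed source of the "always [madvise] never" string seen in the trace.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP is not disabled, so AnonHugePages in /proc/meminfo may be non-zero
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon_kb} kB"
    fi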
00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8023924 kB' 'MemAvailable: 9533940 kB' 'Buffers: 2436 kB' 'Cached: 1715236 kB' 'SwapCached: 0 kB' 'Active: 484260 kB' 'Inactive: 1356124 kB' 'Active(anon): 122512 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122764 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151844 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72644 kB' 'KernelStack: 4728 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53344 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.416 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8023924 kB' 'MemAvailable: 9533940 kB' 'Buffers: 2436 kB' 'Cached: 1715236 kB' 'SwapCached: 0 kB' 'Active: 484088 kB' 'Inactive: 1356124 kB' 'Active(anon): 122340 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122592 kB' 'Mapped: 48952 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151808 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72608 kB' 'KernelStack: 4664 kB' 'PageTables: 3596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.417 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
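(The AnonHugePages lookup above returns 0, and the same setup/common.sh walk is now repeated for HugePages_Surp. The pattern is visible in the trace: pick /proc/meminfo, or a per-node meminfo file when a node is given, strip any "Node N" prefix, then split each line on ': ' until the requested key matches. A self-contained re-creation of that lookup, with illustrative names rather than the exact SPDK implementation:)

    #!/usr/bin/env bash
    shopt -s extglob
    # Illustrative get_meminfo: mirrors the parsing steps shown in the setup/common.sh trace.
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # Per-node lookups read the node's own meminfo file when it exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")           # drop the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        echo 0                                     # fallback; an assumption, not from the trace
    }
    get_meminfo AnonHugePages                      # prints 0 here, matching anon=0 above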
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.418 09:51:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.418 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.680 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 
09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:24.681 09:51:01 
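(With anon=0 and surp=0 established, the trace moves on to HugePages_Rsvd; those three values feed a final per-node comparison of the same shape as the "node0=512 expecting 512" line above. A rough outline of that closing check, with the per-node sysfs counter as an assumed stand-in for the script's own bookkeeping:)

    # Rough outline; the real tallying lives in setup/hugepages.sh and differs in detail.
    declare -A nodes_test=( [0]=1024 )          # expected pages per node from the allocation step
    for node in "${!nodes_test[@]}"; do
        # Assumed source of the actual count; the SPDK script derives it via get_meminfo instead.
        actual=$(cat /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages)
        echo "node${node}=${actual} expecting ${nodes_test[$node]}"
        [[ $actual == "${nodes_test[$node]}" ]] || exit 1
    done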
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8023924 kB' 'MemAvailable: 9533940 kB' 'Buffers: 2436 kB' 'Cached: 1715236 kB' 'SwapCached: 0 kB' 'Active: 483844 kB' 'Inactive: 1356124 kB' 'Active(anon): 122096 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122580 kB' 'Mapped: 48892 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151800 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72600 kB' 'KernelStack: 4600 kB' 'PageTables: 3448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.681 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
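The long run of "continue" trace here is the meminfo lookup in setup/common.sh doing a linear scan: with IFS set to ': ' it reads the meminfo source one line at a time into var/val and skips every key that is not the one being requested (HugePages_Rsvd at this point in the run). A minimal, self-contained sketch of that style of lookup, using a hypothetical helper name rather than the real setup/common.sh function:

    #!/usr/bin/env bash
    # Sketch only: a simplified stand-in for the lookup traced above.
    # meminfo_lookup is a hypothetical name, not part of the SPDK scripts.
    meminfo_lookup() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each skipped key shows up as one "continue" in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    meminfo_lookup HugePages_Rsvd   # prints 0 on this run (resv=0 in the trace that follows)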
00:10:24.682 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:24.683 nr_hugepages=1024 00:10:24.683 resv_hugepages=0 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:24.683 surplus_hugepages=0 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:24.683 anon_hugepages=0 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8023924 kB' 'MemAvailable: 9533940 kB' 'Buffers: 2436 kB' 'Cached: 1715236 kB' 'SwapCached: 0 kB' 'Active: 484048 kB' 'Inactive: 1356116 kB' 'Active(anon): 122300 kB' 'Inactive(anon): 10656 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122516 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151804 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72604 kB' 'KernelStack: 4628 kB' 'PageTables: 3412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
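The snapshot printed just above is the state the even_2G_alloc test is asserting on: HugePages_Total and HugePages_Free are both 1024, HugePages_Rsvd and HugePages_Surp are 0, and with a 2048 kB page size that is exactly the 2 GiB reported as Hugetlb. A quick worked check of those figures in shell arithmetic, mirroring the (( 1024 == nr_hugepages + surp + resv )) test traced earlier (numbers copied from this run, not produced by the test in this form):

    # Values taken from the meminfo snapshot above.
    nr_hugepages=1024; surp=0; resv=0; hugepagesize_kb=2048
    (( 1024 == nr_hugepages + surp + resv )) && echo "consistency check passes"
    echo "Hugetlb = $(( nr_hugepages * hugepagesize_kb )) kB"   # 2097152 kB = 2 GiB, matching the snapshot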
00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.683 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.684 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
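The same scan is then repeated for HugePages_Total, and further down for the per-node HugePages_Surp counter, where the helper switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the "Node 0" prefix (this VM has a single NUMA node, hence no_nodes=1 and the "node0=1024 expecting 1024" result near the end of this test's trace). A rough sketch of that per-node source selection, with illustrative names only and not the literal setup/common.sh code:

    # Sketch of the per-node fallback the trace shows.
    node=0
    mem_f=/proc/meminfo                                   # system-wide default
    node_f=/sys/devices/system/node/node$node/meminfo     # per-node counters, prefixed with "Node $node"
    [[ -e $node_f ]] && mem_f=$node_f
    grep -E 'HugePages_(Total|Free|Surp)' "$mem_f" | sed 's/^Node [0-9]* *//'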
00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8023924 kB' 'MemUsed: 4208320 kB' 'SwapCached: 0 kB' 'Active: 483788 kB' 'Inactive: 1356116 kB' 'Active(anon): 122040 kB' 'Inactive(anon): 10656 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345460 kB' 
'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1717672 kB' 'Mapped: 48764 kB' 'AnonPages: 122516 kB' 'Shmem: 10464 kB' 'KernelStack: 4612 kB' 'PageTables: 3380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79200 kB' 'Slab: 151804 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.685 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.686 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:24.687 node0=1024 expecting 1024 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:24.687 00:10:24.687 real 0m0.599s 00:10:24.687 user 0m0.302s 00:10:24.687 sys 0m0.341s 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:24.687 09:51:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:24.687 ************************************ 00:10:24.687 END TEST even_2G_alloc 00:10:24.687 ************************************ 00:10:24.687 09:51:01 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:10:24.687 09:51:01 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:24.687 09:51:01 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:24.687 09:51:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:24.687 ************************************ 00:10:24.687 START TEST odd_alloc 00:10:24.687 ************************************ 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # 
local size=2098176 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:24.687 09:51:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:25.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:25.259 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:25.259 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@18 -- # local node= 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8019972 kB' 'MemAvailable: 9529988 kB' 'Buffers: 2436 kB' 'Cached: 1715236 kB' 'SwapCached: 0 kB' 'Active: 484372 kB' 'Inactive: 1356124 kB' 'Active(anon): 122624 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123136 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151848 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72648 kB' 'KernelStack: 4676 kB' 'PageTables: 3416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53344 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.259 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8019972 kB' 'MemAvailable: 9529988 kB' 'Buffers: 2436 kB' 'Cached: 1715236 kB' 'SwapCached: 0 kB' 'Active: 484552 kB' 'Inactive: 1356124 kB' 'Active(anon): 122804 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123056 kB' 'Mapped: 48940 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151844 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72644 kB' 'KernelStack: 4660 kB' 'PageTables: 3380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 
09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.260 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
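Each block of repeated '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue' lines here is the traced inner loop of the get_meminfo helper: it mapfiles /proc/meminfo (or a node's meminfo, with the 'Node N ' prefix stripped), walks every key until it reaches the requested one, echoes its value and returns, which is why a single lookup such as HugePages_Surp produces one trace line per meminfo field. The anon, surp and resv values gathered this way (all 0 in this run) later feed the (( 1025 == nr_hugepages + surp + resv )) consistency check. A compact awk-based equivalent is sketched below as an assumption, not the actual setup/common.sh implementation.

# Sketch only (assumed equivalent of setup/common.sh's get_meminfo, not the
# real implementation): print the value of one meminfo key, defaulting to 0.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node files prefix every key with "Node N"; use one when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    awk -v key="$get" '
        { sub(/^Node [0-9]+ /, "") }
        $1 == key ":" { print ($2 == "" ? 0 : $2); found = 1; exit }
        END { if (!found) print 0 }
    ' "$mem_f"
}

anon=$(get_meminfo AnonHugePages)    # 0 kB in this run
surp=$(get_meminfo HugePages_Surp)   # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run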
00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.261 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8019972 kB' 'MemAvailable: 9529988 kB' 'Buffers: 2436 kB' 'Cached: 1715236 kB' 'SwapCached: 0 kB' 'Active: 484032 kB' 'Inactive: 1356124 kB' 'Active(anon): 122284 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122780 kB' 'Mapped: 48940 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151836 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72636 kB' 'KernelStack: 4612 kB' 'PageTables: 3280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.262 09:51:02 setup.sh.hugepages.odd_alloc -- 
[xtrace condensed: setup/common.sh@31-32 keeps reading /proc/meminfo, comparing every remaining key from KReclaimable through HugePages_Free against HugePages_Rsvd and skipping each non-matching line with continue]
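The condensed scan above is the core of the get_meminfo helper that setup/common.sh traces throughout this test: it loads /proc/meminfo (or a per-node meminfo file when a node number is passed), strips any leading "Node N " prefix, and prints the value of the one key it was asked for. The following is a minimal stand-alone sketch of that pattern, reconstructed from the trace rather than copied from the SPDK script, so names and details are approximate:

#!/usr/bin/env bash
# Sketch of the field scan traced above: pick one "Key: value" pair out of
# /proc/meminfo, or out of a per-node meminfo file when a node id is given.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2          # key to look up, optional NUMA node id
    local var val _ line
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node counters live under /sys; fall back to the global file.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that part.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every non-matching key
        echo "$val"
        return 0
    done
    return 1
}

# Example: reserved hugepages system-wide, and free hugepages on node 0.
get_meminfo HugePages_Rsvd
get_meminfo HugePages_Free 0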
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:25.263 nr_hugepages=1025 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:10:25.263 resv_hugepages=0 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:25.263 surplus_hugepages=0 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:25.263 anon_hugepages=0 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8019972 kB' 'MemAvailable: 9529988 kB' 'Buffers: 2436 kB' 'Cached: 1715236 kB' 'SwapCached: 0 kB' 'Active: 484048 kB' 'Inactive: 1356124 kB' 'Active(anon): 122300 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122832 kB' 'Mapped: 48940 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151820 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72620 kB' 'KernelStack: 4628 kB' 'PageTables: 3308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.263 09:51:02 setup.sh.hugepages.odd_alloc -- 
[xtrace condensed: the scan continues over the remaining /proc/meminfo keys from Active(anon) through FileHugePages, each compared against HugePages_Total and skipped with continue]
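When the scan finally reaches HugePages_Total (echoed as 1025 just below), hugepages.sh checks that the pool reported by the kernel equals the count the test requested plus any surplus and reserved pages. A rough sketch of that accounting, using awk in place of the traced read loop purely for brevity, would be:

#!/usr/bin/env bash
# Consistency check for the hugepage pool: total pages must account for the
# requested count plus surplus and reserved pages.
nr_hugepages=1025    # the odd page count this test configured

total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
else
    echo "unexpected pool size: $total != $nr_hugepages + $surp + $resv" >&2
    exit 1
fi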
-- setup/common.sh@31 -- # IFS=': ' 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:25.264 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8019720 kB' 'MemUsed: 4212524 kB' 'SwapCached: 0 kB' 'Active: 484020 kB' 'Inactive: 1356124 kB' 'Active(anon): 122272 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1717672 kB' 'Mapped: 48940 kB' 'AnonPages: 122804 kB' 'Shmem: 10464 kB' 'KernelStack: 4612 kB' 'PageTables: 3276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79200 kB' 'Slab: 151820 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
[xtrace condensed: the node0 meminfo scan continues over the keys from Inactive through FileHugePages, each compared against HugePages_Surp and skipped with continue]
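After the global check, the script moves to per-node bookkeeping: every node found under /sys/devices/system/node is expected to hold the full 1025-page pool on this single-node VM, the node's surplus pages are folded into the observed value, and the comparison that appears just below as "node0=1025 expecting 1025" is printed. A simplified sketch of that loop (the traced script also folds in the globally computed reserved count, which is 0 here):

#!/usr/bin/env bash
# Per-node hugepage bookkeeping: record the expected count for each NUMA node,
# add the node's surplus pages, and print observed vs. expected.
shopt -s extglob nullglob

expected=1025
nodes_test=()                                    # indexed by NUMA node id

for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    nodes_test[node]=$expected
done

for node in "${!nodes_test[@]}"; do
    # Per-node meminfo lines look like "Node 0 HugePages_Surp:  0".
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' \
           "/sys/devices/system/node/node$node/meminfo")
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_test[node]} expecting $expected"
done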
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:25.265 node0=1025 expecting 1025 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:10:25.265 00:10:25.265 real 0m0.610s 00:10:25.265 user 0m0.281s 00:10:25.265 sys 0m0.375s 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:25.265 09:51:02 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:25.265 ************************************ 00:10:25.265 END TEST odd_alloc 00:10:25.265 ************************************ 00:10:25.265 09:51:02 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:10:25.266 09:51:02 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:25.266 09:51:02 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:25.266 09:51:02 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:25.266 ************************************ 00:10:25.266 
START TEST custom_alloc 00:10:25.266 ************************************ 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:25.266 09:51:02 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:25.266 09:51:02 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:25.836 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:25.836 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:25.836 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9062792 kB' 'MemAvailable: 10572812 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484384 kB' 'Inactive: 1356136 kB' 'Active(anon): 122636 kB' 'Inactive(anon): 10672 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122664 kB' 'Mapped: 49132 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151808 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72608 kB' 'KernelStack: 4672 kB' 'PageTables: 3340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 354456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53344 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.836 09:51:03 setup.sh.hugepages.custom_alloc -- 
[xtrace condensed: the /proc/meminfo scan continues over the keys from SwapCached through NFS_Unstable, each compared against AnonHugePages and skipped with continue]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
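The long run of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" steps traced here is the meminfo lookup helper walking every "key: value" line it mapfile'd from meminfo until it reaches the requested field, then echoing that field's value and returning. A minimal stand-alone sketch of that scan pattern, reconstructed from the xtrace rather than copied from setup/common.sh (the function name and details below are assumptions):

    # Hypothetical re-creation of the lookup loop seen in the xtrace.
    # Splits each "key: value kB" line on ':' and space, and prints the value
    # of the first line whose key matches the requested field.
    lookup_meminfo_field() {
        local get="$1" var val _
        local -a mem
        mapfile -t mem < /proc/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # mismatched keys are skipped, as in the trace
            echo "$val"                        # e.g. "0" for HugePages_Surp, "512" for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as "lookup_meminfo_field HugePages_Surp" on the system shown here it would print 0, matching the anon=0 / surp=0 / resv=0 results the test records as each of these scans completes.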
00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9062792 kB' 'MemAvailable: 10572812 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 483948 kB' 'Inactive: 1356128 kB' 'Active(anon): 122200 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122260 kB' 'Mapped: 49072 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151800 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72600 kB' 'KernelStack: 4656 kB' 'PageTables: 3552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.837 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:25.838 
09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9062792 kB' 'MemAvailable: 10572808 kB' 'Buffers: 2436 kB' 'Cached: 1715236 kB' 'SwapCached: 0 kB' 'Active: 483860 kB' 'Inactive: 1356132 kB' 'Active(anon): 122112 kB' 'Inactive(anon): 10672 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345460 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122196 kB' 'Mapped: 49072 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151784 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72584 kB' 'KernelStack: 4592 kB' 'PageTables: 3408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 
09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.838 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
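Each lookup in this trace also repeats the same source selection: "local node=" is empty, so the "[[ -e /sys/devices/system/node/node/meminfo ]]" test (the doubled "node/node" is just the empty $node appended to the path) fails and the helper falls back to the system-wide /proc/meminfo. A hedged illustration of that choice, plus the prefix strip the per-node files need, not the verbatim setup/common.sh:

    # Hypothetical sketch: pick a per-node meminfo file when a node id is given
    # and it exists, otherwise use /proc/meminfo as the trace shows.
    pick_meminfo_file() {
        local node="${1:-}"
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

    # Per-node files prefix every line with "Node <n> ", e.g. "Node 0 MemTotal: ...",
    # which is what the expansion  mem=("${mem[@]#Node +([0-9]) }")  in the trace
    # removes before the key/value scan (the +([0-9]) pattern needs: shopt -s extglob).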
00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:10:25.839 nr_hugepages=512 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:25.839 resv_hugepages=0 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:25.839 surplus_hugepages=0 00:10:25.839 anon_hugepages=0 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9062540 kB' 'MemAvailable: 10572560 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484092 kB' 'Inactive: 1356128 kB' 'Active(anon): 122344 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122904 kB' 'Mapped: 48944 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151792 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72592 kB' 'KernelStack: 4652 kB' 'PageTables: 3444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
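The field-by-field scan traced here is the get_meminfo pattern from setup/common.sh: each meminfo line is split on ': ', every field is skipped with continue until the requested key is reached (HugePages_Rsvd above, HugePages_Total in this pass), and its value is echoed back to the caller. Below is a minimal standalone sketch of that pattern; it is an illustration under stated assumptions, not SPDK's actual helper, and get_meminfo_sketch is a hypothetical name.

```bash
#!/usr/bin/env bash
# Simplified sketch of the meminfo lookup the xtrace above steps through.
# Assumption: this mirrors the visible behaviour (IFS=': ', read -r var val _,
# continue until the key matches), not the exact setup/common.sh implementation.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo

    # Per-node lookups use that node's own meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local var val _
    # Strip the "Node N " prefix of per-node files, then scan field by field.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip until the requested key
        echo "$val"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

# Example: the system-wide lookup in this trace resolves HugePages_Total to 512.
get_meminfo_sketch HugePages_Total
# Example: the per-node lookup further down reads node0's meminfo instead.
get_meminfo_sketch HugePages_Surp 0
```

With no node argument the sketch reads /proc/meminfo, matching the system-wide HugePages_Total lookup above; with a node index it reads that node's meminfo, as the HugePages_Surp 0 lookup later in this trace does.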
00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:25.839 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 9062540 kB' 'MemUsed: 3169704 kB' 'SwapCached: 0 kB' 'Active: 484048 kB' 'Inactive: 1356128 kB' 'Active(anon): 122300 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1717676 kB' 'Mapped: 48944 kB' 'AnonPages: 122596 kB' 'Shmem: 10464 kB' 'KernelStack: 4652 kB' 'PageTables: 3444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79200 kB' 'Slab: 151792 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.098 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:10:26.099 node0=512 expecting 512 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:10:26.099 00:10:26.099 real 0m0.617s 00:10:26.099 user 0m0.276s 00:10:26.099 sys 0m0.387s 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:26.099 09:51:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:26.099 ************************************ 00:10:26.099 END TEST custom_alloc 00:10:26.099 ************************************ 00:10:26.099 09:51:03 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:10:26.099 09:51:03 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:26.099 09:51:03 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:26.099 09:51:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:26.099 ************************************ 00:10:26.099 START TEST no_shrink_alloc 00:10:26.099 ************************************ 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:26.099 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:26.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:26.357 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.357 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:26.620 09:51:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8023164 kB' 'MemAvailable: 9533184 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484400 kB' 'Inactive: 1356128 kB' 'Active(anon): 122652 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123136 kB' 'Mapped: 49088 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151832 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72632 kB' 'KernelStack: 4756 kB' 'PageTables: 3512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53328 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.620 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.621 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
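By this point the no_shrink_alloc test has requested 1024 hugepages for node 0 (get_test_nr_hugepages 2097152 0, i.e. 1024 pages at the 2048 kB hugepage size) and verify_nr_hugepages is re-reading /proc/meminfo to confirm the allocation has not shrunk: anonymous, surplus and reserved hugepages are collected, the total is checked against nr_hugepages + surp + resv, and each node's share is compared with its expected count. The sketch below reduces that bookkeeping to its arithmetic; it is a hedged illustration, not the hugepages.sh implementation, and the expected count is hard-coded for the example.

```bash
#!/usr/bin/env bash
# Sketch of the verification arithmetic the trace performs (illustrative only).
set -euo pipefail

expected=1024   # assumption: the count requested via get_test_nr_hugepages

total=$(awk '/^HugePages_Total:/ {print $NF}' /proc/meminfo)
free=$(awk '/^HugePages_Free:/ {print $NF}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $NF}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $NF}' /proc/meminfo)

echo "nr_hugepages=$total free=$free resv_hugepages=$rsvd surplus_hugepages=$surp"

# Global sanity check, as in the hugepages.sh@107/@110 lines of this trace.
(( total == expected + surp + rsvd )) || echo "unexpected global hugepage count"

# Per-node comparison, mirroring the 'node0=512 expecting 512' style output
# printed at the end of the custom_alloc test earlier in this log.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    got=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    echo "node$node=$got expecting $expected"
done
```

The node0=512 expecting 512 line at the end of custom_alloc above is this per-node comparison; the verify_nr_hugepages call traced here performs the same check against the 1024-page request.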
00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.622 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8023164 kB' 'MemAvailable: 9533184 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484288 kB' 'Inactive: 1356128 kB' 'Active(anon): 122540 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123000 kB' 'Mapped: 48948 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151832 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72632 kB' 'KernelStack: 4708 kB' 'PageTables: 3408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- 
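The xtrace above is setup/common.sh's get_meminfo helper walking a /proc/meminfo snapshot one "Key: value" pair at a time until it reaches the requested key (first AnonHugePages, now HugePages_Surp). For readability, here is a condensed sketch of that lookup reconstructed from the traced commands; it is simplified rather than the verbatim helper (the real one also honors an override that is checked, and empty, at common.sh@25), and the explicit shopt -s extglob line is an assumption needed for the +([0-9]) pattern.

    # condensed sketch of the traced lookup -- not the verbatim setup/common.sh source
    shopt -s extglob   # assumed enabled in the real script; required for +([0-9]) below

    get_meminfo() {
        local get=$1 node=$2
        local var val mem_f mem

        mem_f=/proc/meminfo
        # when a NUMA node is passed, prefer that node's meminfo; here node is empty,
        # so /sys/devices/system/node/node/meminfo does not exist and /proc/meminfo is used
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node <n> "; strip it so keys match
        mem=("${mem[@]#Node +([0-9]) }")

        # split "Key:   value kB" into var/val and print the value of the requested key
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Surp    # -> 0 on this runner, per the snapshot above
    get_meminfo HugePages_Total   # -> 1024
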
setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.623 09:51:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.623 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.624 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.625 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8023164 kB' 'MemAvailable: 9533184 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484108 kB' 'Inactive: 1356120 kB' 'Active(anon): 122360 kB' 'Inactive(anon): 10656 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122812 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151824 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72624 kB' 'KernelStack: 4688 kB' 'PageTables: 3340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- 
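The long single-quoted blob printed at common.sh@16 above is the full /proc/meminfo snapshot that the matcher then walks key by key; only a handful of its fields are actually consumed by this test. Pulling them straight out of /proc/meminfo gives the same picture (values as reported in the snapshot on this runner; column spacing approximate):

    grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize)' /proc/meminfo
    # AnonHugePages:         0 kB
    # HugePages_Total:    1024
    # HugePages_Free:     1024
    # HugePages_Rsvd:        0
    # HugePages_Surp:        0
    # Hugepagesize:       2048 kB
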
setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.626 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.627 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.628 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:10:26.629 nr_hugepages=1024 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:26.629 resv_hugepages=0 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:26.629 surplus_hugepages=0 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:26.629 anon_hugepages=0 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8023164 kB' 'MemAvailable: 9533184 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 484108 kB' 'Inactive: 1356120 kB' 'Active(anon): 122360 kB' 'Inactive(anon): 10656 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122852 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 79200 kB' 'Slab: 151820 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72620 kB' 'KernelStack: 4720 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 354208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
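At this point hugepages.sh has folded those lookups into its bookkeeping: the assignments at hugepages.sh@97/@99/@100 show anon, surp and resv all coming back 0, nr_hugepages is echoed as 1024, and the arithmetic guards at hugepages.sh@107 and @109 pass before HugePages_Total is re-read at @110. A minimal re-statement of that accounting is sketched below; get_meminfo is the helper traced earlier, while the total variable is an assumption, since xtrace only shows the already-expanded literal 1024 and not which meminfo field it was read from.

    # hedged sketch of the accounting asserted at hugepages.sh@107 and @109
    nr_hugepages=1024                       # requested pool size (echoed above)
    anon=$(get_meminfo AnonHugePages)       # 0 in this run, echoed as anon_hugepages=0
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    total=$(get_meminfo HugePages_Total)    # 1024 -- assumed source of the literal 1024 in the trace

    # with no surplus or reserved pages, the pool must still hold exactly what was requested
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
    (( total == nr_hugepages )) || echo 'hugepage pool shrank or grew unexpectedly' >&2
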
continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.629 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 
09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 
09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.630 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8023164 kB' 'MemUsed: 4209080 kB' 'SwapCached: 0 kB' 'Active: 484256 kB' 'Inactive: 1356120 kB' 'Active(anon): 122508 kB' 'Inactive(anon): 10656 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1717676 kB' 'Mapped: 48764 kB' 'AnonPages: 122968 kB' 'Shmem: 10464 kB' 'KernelStack: 4720 kB' 'PageTables: 3404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79200 kB' 'Slab: 151820 kB' 'SReclaimable: 79200 kB' 'SUnreclaim: 72620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.631 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
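The run of continue records here is get_meminfo() walking /sys/devices/system/node/node0/meminfo key by key: each line is split with IFS=': ' and read -r var val _, non-matching keys fall through to continue, and a value is only echoed once the requested key (HugePages_Surp in this pass) comes up. A minimal self-contained sketch of that lookup pattern, assuming the usual "Node <N> " prefix on per-node meminfo files; the name get_meminfo_sketch and its interface are illustrative, not the actual setup/common.sh helper:

  #!/usr/bin/env bash
  # Sketch: fetch one value from /proc/meminfo, or from a per-NUMA-node meminfo
  # file when a node number is given. Mirrors the IFS=': ' / read -r var val _
  # splitting visible in the trace; not the SPDK implementation itself.
  get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      # Per-node files prefix every line with "Node <N> "; strip that first,
      # then split each remaining line into key and value.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }

  get_meminfo_sketch HugePages_Surp 0   # would print 0 for the node0 state shown in this run

Such a sketch hands back the bare value (kB for most keys, a page count for the HugePages_* keys), which is the shape the surrounding hugepages.sh arithmetic, e.g. (( 1024 == nr_hugepages + surp + resv )), consumes.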
00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 
09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.632 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:26.633 
09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:26.633 node0=1024 expecting 1024 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:10:26.633 09:51:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:27.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:27.210 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:27.210 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:27.210 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:27.210 09:51:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8031260 kB' 'MemAvailable: 9541272 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 479644 kB' 'Inactive: 1356128 kB' 'Active(anon): 117896 kB' 'Inactive(anon): 10664 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 118152 kB' 'Mapped: 48148 kB' 'Shmem: 10464 kB' 'KReclaimable: 79184 kB' 'Slab: 151540 kB' 'SReclaimable: 79184 kB' 'SUnreclaim: 72356 kB' 'KernelStack: 4612 kB' 'PageTables: 3192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53232 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.210 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.211 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.211 09:51:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[get_meminfo AnonHugePages: the scan skips NFS_Unstable and each remaining non-matching field (Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) with continue before reaching the match below]
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:27.212 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8031260 kB' 'MemAvailable: 9541272 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 479216 kB' 'Inactive: 1356120 kB' 'Active(anon): 117468 kB' 'Inactive(anon): 10656 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117980 kB' 'Mapped: 47756 kB' 'Shmem: 10464 kB' 'KReclaimable: 79184 kB' 'Slab: 151524 kB' 'SReclaimable: 79184 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 4576 kB' 'PageTables: 2884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53200 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB'
[get_meminfo HugePages_Surp: the loop walks the snapshot above field by field, skipping every non-matching key from MemTotal through HugePages_Rsvd with continue]
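Each get_meminfo call in this trace snapshots /proc/meminfo (or a per-node copy under /sys/devices/system/node when a node index is supplied), strips any leading "Node N" label, and then walks the fields with IFS=': ' until the requested key matches, echoing that key's value. The sketch below illustrates that lookup pattern; the function and variable names are illustrative rather than the literal setup/common.sh source, and it streams the system-wide file directly instead of buffering it with mapfile as the real helper does.

#!/usr/bin/env bash
# Illustrative sketch of the lookup pattern seen in the trace, not the real helper.
lookup_meminfo() {
    local key=$1 var val _
    # Split each "Key:   value [kB]" line on ':' plus spaces, as IFS=': ' does above.
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue    # non-matching field: try the next line
        echo "$val"                         # value in kB, or a bare page count
        return 0
    done < /proc/meminfo
    return 1                                # key not present
}

anon=$(lookup_meminfo AnonHugePages)        # 0 in this run
surp=$(lookup_meminfo HugePages_Surp)       # 0 in this run
resv=$(lookup_meminfo HugePages_Rsvd)       # 0 in this run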
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:27.214 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8031260 kB' 'MemAvailable: 9541272 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 479068 kB' 'Inactive: 1356120 kB' 'Active(anon): 117320 kB' 'Inactive(anon): 10656 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117832 kB' 'Mapped: 47772 kB' 'Shmem: 10464 kB' 'KReclaimable: 79184 kB' 'Slab: 151524 kB' 'SReclaimable: 79184 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 4560 kB' 'PageTables: 2844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB'
[get_meminfo HugePages_Rsvd: the loop walks the snapshot above field by field, skipping every non-matching key from MemTotal through HugePages_Free with continue]
00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
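With anon, surp and resv collected, hugepages.sh next prints the numbers and asserts that the configured pool (1024 pages of 2048 kB in this run) is fully accounted for, with no surplus or reserved pages outstanding. A rough, self-contained rendering of that bookkeeping follows; the variable names are taken from the trace, the surrounding test scaffolding is simplified, and reading nr_hugepages from /proc/sys/vm/nr_hugepages is an assumption of this sketch rather than what the script literally does.

#!/usr/bin/env bash
# Sketch of the accounting check, not the literal setup/hugepages.sh code.
set -e

nr_hugepages=$(< /proc/sys/vm/nr_hugepages)                  # pages currently in the pool
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # surplus pages
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # reserved pages
anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)     # transparent hugepage usage, kB

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

expected=1024                                  # the pool size this test configured
(( expected == nr_hugepages + surp + resv ))   # pool fully accounted for
(( expected == nr_hugepages ))                 # and no surplus/reserved slack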
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:27.216 nr_hugepages=1024 00:10:27.216 resv_hugepages=0 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:27.216 surplus_hugepages=0 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:27.216 anon_hugepages=0 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8031260 kB' 'MemAvailable: 9541272 kB' 'Buffers: 2436 kB' 'Cached: 1715240 kB' 'SwapCached: 0 kB' 'Active: 479068 kB' 'Inactive: 1356120 kB' 'Active(anon): 117320 kB' 'Inactive(anon): 10656 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 117832 kB' 'Mapped: 47772 kB' 'Shmem: 10464 kB' 'KReclaimable: 79184 kB' 'Slab: 151524 kB' 'SReclaimable: 79184 kB' 'SUnreclaim: 72340 kB' 'KernelStack: 4544 kB' 'PageTables: 2812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 335812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53216 kB' 'VmallocChunk: 0 kB' 'Percpu: 6000 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 6133760 kB' 'DirectMap1G: 8388608 kB' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.216 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.217 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
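The wall of "continue" entries above is setup/common.sh's get_meminfo scanning /proc/meminfo one key at a time: each line is split on ': ', every key that is not the requested one is skipped, and the value is echoed once HugePages_Total is reached. A minimal sketch of that read pattern follows; the function name read_meminfo_field is illustrative and not part of the repository.

    # Minimal sketch of the scan traced above: split each /proc/meminfo line on
    # ': ', ignore keys that do not match, print the value of the requested key.
    # read_meminfo_field is a made-up name; setup/common.sh calls this get_meminfo.
    read_meminfo_field() {
        local want=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$want" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    read_meminfo_field HugePages_Total   # prints 1024 on the host traced here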
00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:27.218 09:51:04 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232244 kB' 'MemFree: 8031260 kB' 'MemUsed: 4200984 kB' 'SwapCached: 0 kB' 'Active: 479400 kB' 'Inactive: 1356120 kB' 'Active(anon): 117652 kB' 'Inactive(anon): 10656 kB' 'Active(file): 361748 kB' 'Inactive(file): 1345464 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1717676 kB' 'Mapped: 47772 kB' 'AnonPages: 118164 kB' 'Shmem: 10464 kB' 'KernelStack: 4576 kB' 'PageTables: 2876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79184 kB' 'Slab: 151524 kB' 'SReclaimable: 79184 kB' 'SUnreclaim: 72340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.218 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
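The second pass above runs the same scan against the per-node file /sys/devices/system/node/node0/meminfo, whose lines carry a leading "Node 0 " prefix; the trace strips that prefix with an extglob pattern before splitting. A hedged per-node sketch, with node_meminfo_field as an invented helper name:

    # Per-node variant: node meminfo lines look like "Node 0 HugePages_Surp: 0",
    # so drop the "Node <n> " prefix (extglob) before splitting on ': '.
    # node_meminfo_field is illustrative, not a name used by setup/common.sh.
    shopt -s extglob
    node_meminfo_field() {
        local want=$1 node=$2 line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }              # strip the "Node 0 " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$want" ]]; then
                echo "$val"
                return 0
            fi
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }

    node_meminfo_field HugePages_Surp 0   # prints 0, matching the run above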
00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:27.219 
09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:27.219 node0=1024 expecting 1024 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:27.219 00:10:27.219 real 0m1.182s 00:10:27.219 user 0m0.546s 00:10:27.219 sys 0m0.719s 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:27.219 09:51:04 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:10:27.219 ************************************ 00:10:27.219 END TEST no_shrink_alloc 00:10:27.219 ************************************ 00:10:27.219 09:51:04 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:10:27.219 09:51:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:10:27.219 09:51:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:10:27.219 09:51:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:27.219 09:51:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:27.219 09:51:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:27.219 09:51:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:10:27.219 09:51:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:10:27.219 09:51:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:10:27.219 00:10:27.219 real 0m5.332s 00:10:27.219 user 0m2.408s 00:10:27.220 sys 0m3.167s 00:10:27.220 09:51:04 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:27.220 09:51:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:10:27.220 ************************************ 00:10:27.220 END TEST hugepages 00:10:27.220 ************************************ 00:10:27.478 09:51:04 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:10:27.478 09:51:04 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:27.478 09:51:04 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:27.478 09:51:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:27.478 ************************************ 00:10:27.478 START TEST driver 00:10:27.478 ************************************ 00:10:27.478 09:51:04 setup.sh.driver -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:10:27.478 * Looking for test storage... 
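The "node0=1024 expecting 1024" line that closes no_shrink_alloc comes from the assertion at hugepages.sh@110: the HugePages_Total read back from meminfo has to equal the requested page count plus surplus and reserved pages. Below is a self-contained sketch of that accounting check; reading resv straight from HugePages_Rsvd is my simplification, the script derives surp/resv from earlier get_meminfo calls.

    # Sketch of the accounting behind "node0=1024 expecting 1024". The awk
    # one-liners stand in for get_meminfo; pulling resv from HugePages_Rsvd is
    # an assumption made for brevity.
    nr_hugepages=1024                                             # requested by the test
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "node0=${total} expecting ${nr_hugepages}"
    else
        echo "hugepage accounting mismatch: ${total} != ${nr_hugepages}+${surp}+${resv}" >&2
    fi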
00:10:27.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:27.478 09:51:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:10:27.478 09:51:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:27.478 09:51:04 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:28.044 09:51:05 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:10:28.044 09:51:05 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:28.044 09:51:05 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:28.044 09:51:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:10:28.044 ************************************ 00:10:28.044 START TEST guess_driver 00:10:28.044 ************************************ 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:10:28.044 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:10:28.302 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:10:28.302 insmod /lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:10:28.302 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:10:28.302 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:10:28.302 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:10:28.302 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:10:28.302 Looking for driver=uio_pci_generic 00:10:28.302 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:28.302 09:51:05 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
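guess_driver's pick_driver, traced above, tries vfio first (it only qualifies when /sys/kernel/iommu_groups is non-empty or unsafe no-IOMMU mode is enabled) and otherwise accepts uio_pci_generic if modprobe --show-depends resolves it to a .ko. A sketch of that decision; the function name and the "vfio-pci" spelling of the result are mine, not taken from setup/driver.sh.

    # Sketch of the pick traced above: vfio wins when the host exposes IOMMU
    # groups or unsafe no-IOMMU mode is enabled; otherwise uio_pci_generic is
    # accepted if modprobe can resolve it. pick_uio_or_vfio and the "vfio-pci"
    # result string are illustrative.
    pick_uio_or_vfio() {
        local unsafe=""
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }

    pick_uio_or_vfio   # prints uio_pci_generic on this no-IOMMU VM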
00:10:28.302 09:51:05 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:10:28.302 09:51:05 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:28.866 09:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:10:28.866 09:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:10:28.866 09:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:29.124 09:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:29.124 09:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:10:29.124 09:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:29.124 09:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:29.124 09:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:10:29.124 09:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:29.124 09:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:10:29.124 09:51:06 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:10:29.124 09:51:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:29.124 09:51:06 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:30.058 00:10:30.058 real 0m1.733s 00:10:30.058 user 0m0.603s 00:10:30.058 sys 0m1.171s 00:10:30.058 09:51:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:30.058 09:51:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:10:30.058 ************************************ 00:10:30.058 END TEST guess_driver 00:10:30.058 ************************************ 00:10:30.058 00:10:30.058 real 0m2.588s 00:10:30.058 user 0m0.859s 00:10:30.058 sys 0m1.832s 00:10:30.058 09:51:07 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:30.058 09:51:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:10:30.058 ************************************ 00:10:30.058 END TEST driver 00:10:30.058 ************************************ 00:10:30.058 09:51:07 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:10:30.058 09:51:07 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:30.058 09:51:07 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:30.058 09:51:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:30.058 ************************************ 00:10:30.058 START TEST devices 00:10:30.058 ************************************ 00:10:30.058 09:51:07 setup.sh.devices -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:10:30.058 * Looking for test storage... 
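The [[ -> == \-\> ]] / [[ uio_pci_generic == uio_pci_generic ]] pairs above are guess_driver reading the setup.sh config output back: lines are skipped until the fifth field is the "->" marker, then the sixth field (the driver actually bound) is compared against the guess. A sketch of that validation loop; the field positions are inferred from the trace and should be treated as an assumption about the config output format.

    # Sketch of the validation loop: every config line whose 5th field is the
    # "->" marker names the bound driver in field 6; all of them must match the
    # guess. Field positions come from the trace, not from documentation.
    expected=uio_pci_generic
    fail=0
    while read -r _ _ _ _ marker bound _; do
        [[ $marker == '->' ]] || continue
        [[ $bound == "$expected" ]] || fail=1
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    (( fail == 0 )) && echo "Looking for driver=${expected}: all devices bound as expected"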
00:10:30.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:30.058 09:51:07 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:10:30.058 09:51:07 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:10:30.058 09:51:07 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:30.058 09:51:07 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n2 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n3 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:30.992 09:51:08 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:10:30.992 09:51:08 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:10:30.992 09:51:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:10:30.992 09:51:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:10:30.992 No valid GPT data, bailing 00:10:30.992 09:51:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:30.992 09:51:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:10:30.992 09:51:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:10:30.992 09:51:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:30.992 09:51:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:30.992 09:51:08 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:10:30.992 09:51:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:10:30.992 09:51:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:10:30.992 No valid GPT data, bailing 00:10:30.992 09:51:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:10:30.992 09:51:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:10:30.992 09:51:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:10:30.992 09:51:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:10:30.992 09:51:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:10:30.992 09:51:08 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:10:30.992 09:51:08 setup.sh.devices -- 
setup/devices.sh@202 -- # pci=0000:00:11.0 00:10:30.992 09:51:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:30.993 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:10:30.993 09:51:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:10:30.993 09:51:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:10:31.252 No valid GPT data, bailing 00:10:31.252 09:51:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:10:31.252 09:51:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:10:31.252 09:51:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:10:31.252 09:51:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:10:31.252 09:51:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:10:31.252 09:51:08 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:10:31.252 09:51:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:10:31.252 09:51:08 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:10:31.252 No valid GPT data, bailing 00:10:31.252 09:51:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:31.252 09:51:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:10:31.252 09:51:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:10:31.252 09:51:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:10:31.252 09:51:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:10:31.252 09:51:08 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:10:31.252 09:51:08 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:10:31.252 09:51:08 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:31.252 09:51:08 setup.sh.devices -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:10:31.252 09:51:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:10:31.252 ************************************ 00:10:31.252 START TEST nvme_mount 00:10:31.252 ************************************ 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:10:31.252 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:31.253 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:10:31.253 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:10:31.253 09:51:08 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:10:32.188 Creating new GPT entries in memory. 00:10:32.188 GPT data structures destroyed! You may now partition the disk using fdisk or 00:10:32.188 other utilities. 00:10:32.188 09:51:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:10:32.188 09:51:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:10:32.188 09:51:09 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:10:32.188 09:51:09 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:10:32.188 09:51:09 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:10:33.623 Creating new GPT entries in memory. 00:10:33.623 The operation has completed successfully. 
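Before nvme_mount starts, the devices test has filtered the candidate disks: zoned devices are skipped, a disk whose blkid PTTYPE output is non-empty counts as in use ("No valid GPT data, bailing" is the good case here), and the device must be at least min_disk_size bytes. A compact sketch of that gate; disk_is_candidate is an invented helper and the checks are reduced to their essentials.

    # Sketch of the per-disk gate applied above: reject zoned disks, reject
    # disks that already carry a partition table, require a minimum size.
    # disk_is_candidate is illustrative; the real checks live in devices.sh
    # and scripts/common.sh.
    min_disk_size=3221225472   # 3 GiB, the threshold used by the test
    disk_is_candidate() {
        local dev=$1 zoned pt bytes
        zoned=$(cat "/sys/block/${dev}/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned == none ]] || return 1                    # zoned namespaces are skipped
        pt=$(blkid -s PTTYPE -o value "/dev/${dev}")        # empty output == no table
        [[ -z $pt ]] || return 1                            # existing table => in use
        bytes=$(( $(cat "/sys/block/${dev}/size") * 512 ))  # size is in 512-byte sectors
        (( bytes >= min_disk_size ))
    }

    disk_is_candidate nvme0n1 && echo "nvme0n1 passes the gate (4 GiB, no table)"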
00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58181 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:33.623 09:51:10 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:33.881 09:51:11 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:33.881 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:33.881 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:33.881 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:34.139 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:34.139 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:10:34.140 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:34.140 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:34.140 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:34.140 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:10:34.140 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:34.140 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:34.140 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:10:34.140 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:10:34.140 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:10:34.140 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:10:34.140 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:10:34.398 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:34.398 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:34.398 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:34.398 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:34.398 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:10:34.398 09:51:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:10:34.398 09:51:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:34.398 09:51:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:10:34.398 09:51:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:10:34.398 09:51:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:34.398 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:34.398 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:34.398 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:10:34.398 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:34.398 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:34.399 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:10:34.399 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:34.399 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:10:34.399 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:10:34.399 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:34.399 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:34.399 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:10:34.399 09:51:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:10:34.399 09:51:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:34.657 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:34.657 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:10:34.657 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:10:34.657 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:34.657 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:34.657 09:51:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:34.657 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:34.657 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:10:34.915 09:51:12 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:35.172 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:35.172 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:10:35.172 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:10:35.172 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:35.172 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:35.172 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:35.430 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:35.430 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:35.430 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:35.430 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:35.688 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:35.688 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:10:35.688 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:10:35.688 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:10:35.688 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:35.688 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:10:35.688 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:10:35.688 09:51:12 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:10:35.688 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:10:35.688 00:10:35.688 real 0m4.375s 00:10:35.688 user 0m0.770s 00:10:35.688 sys 0m1.368s 00:10:35.688 09:51:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:35.688 09:51:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:10:35.688 ************************************ 00:10:35.688 END TEST nvme_mount 00:10:35.688 
************************************ 00:10:35.688 09:51:12 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:10:35.688 09:51:12 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:35.688 09:51:12 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:35.688 09:51:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:10:35.688 ************************************ 00:10:35.688 START TEST dm_mount 00:10:35.688 ************************************ 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:10:35.688 09:51:12 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:10:36.628 Creating new GPT entries in memory. 00:10:36.628 GPT data structures destroyed! You may now partition the disk using fdisk or 00:10:36.628 other utilities. 00:10:36.628 09:51:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:10:36.628 09:51:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:10:36.628 09:51:13 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:10:36.628 09:51:13 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:10:36.628 09:51:13 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:10:38.005 Creating new GPT entries in memory. 00:10:38.005 The operation has completed successfully. 
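For readers following along, the partition step the dm_mount test just ran reduces to a few sgdisk calls. This is a minimal sketch, not the harness code: it assumes a scratch disk at /dev/nvme0n1 whose contents may be destroyed, and it substitutes partprobe for the test's sync_dev_uevents.sh helper.

  disk=/dev/nvme0n1                                  # assumption: disposable test disk
  flock "$disk" sgdisk "$disk" --zap-all             # wipe any existing GPT/MBR metadata
  flock "$disk" sgdisk "$disk" --new=1:2048:264191   # partition 1, sector range taken from the log (128 MiB)
  flock "$disk" sgdisk "$disk" --new=2:264192:526335 # partition 2, same size
  partprobe "$disk"                                  # re-read the partition table (the harness uses sync_dev_uevents.sh instead)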
00:10:38.005 09:51:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:10:38.005 09:51:15 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:10:38.005 09:51:15 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:10:38.005 09:51:15 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:10:38.005 09:51:15 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:10:38.940 The operation has completed successfully. 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 58618 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:10:38.940 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:39.199 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:39.199 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:10:39.199 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:10:39.199 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:39.199 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:39.199 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:39.199 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:39.199 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:39.457 09:51:16 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:10:39.457 09:51:16 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:39.715 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:39.715 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:10:39.715 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:10:39.715 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:39.715 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:39.715 09:51:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:10:39.974 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:10:39.974 00:10:39.974 real 0m4.395s 00:10:39.974 user 0m0.476s 00:10:39.974 sys 0m0.873s 00:10:39.974 ************************************ 00:10:39.974 END TEST dm_mount 00:10:39.974 ************************************ 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:39.974 09:51:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:10:40.232 09:51:17 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:10:40.232 09:51:17 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:10:40.232 09:51:17 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:40.232 09:51:17 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:10:40.232 09:51:17 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:10:40.232 09:51:17 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:10:40.232 09:51:17 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:10:40.490 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:40.490 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:40.490 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:40.490 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:40.490 09:51:17 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:10:40.490 09:51:17 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:10:40.490 09:51:17 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:10:40.490 09:51:17 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:10:40.490 09:51:17 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:10:40.490 09:51:17 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:10:40.490 09:51:17 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:10:40.490 00:10:40.490 real 0m10.423s 00:10:40.490 user 0m1.899s 00:10:40.490 sys 0m2.963s 00:10:40.490 09:51:17 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:40.490 09:51:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:10:40.490 ************************************ 00:10:40.490 END TEST devices 00:10:40.490 ************************************ 00:10:40.490 00:10:40.490 real 0m23.935s 00:10:40.490 user 0m7.424s 00:10:40.490 sys 0m11.277s 00:10:40.490 09:51:17 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:40.490 09:51:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:40.490 ************************************ 00:10:40.490 END TEST setup.sh 00:10:40.490 ************************************ 00:10:40.490 09:51:17 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:41.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:41.423 Hugepages 00:10:41.423 node hugesize free / total 00:10:41.423 node0 1048576kB 0 / 0 00:10:41.423 node0 2048kB 2048 / 2048 00:10:41.423 00:10:41.423 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:41.423 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:10:41.423 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:10:41.423 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:10:41.423 09:51:18 -- spdk/autotest.sh@130 -- # uname -s 00:10:41.423 09:51:18 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:10:41.423 09:51:18 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:10:41.423 09:51:18 -- common/autotest_common.sh@1528 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:42.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:42.357 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.357 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:42.357 09:51:19 -- common/autotest_common.sh@1529 -- # sleep 1 00:10:43.730 09:51:20 -- common/autotest_common.sh@1530 -- # bdfs=() 00:10:43.730 09:51:20 -- common/autotest_common.sh@1530 -- # local bdfs 00:10:43.730 09:51:20 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:10:43.730 09:51:20 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:10:43.730 09:51:20 -- common/autotest_common.sh@1510 -- # bdfs=() 00:10:43.730 09:51:20 -- common/autotest_common.sh@1510 -- # local bdfs 00:10:43.730 09:51:20 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:43.730 09:51:20 -- common/autotest_common.sh@1511 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:43.730 09:51:20 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:10:43.730 09:51:20 -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:10:43.730 09:51:20 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:43.730 09:51:20 -- common/autotest_common.sh@1533 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:43.987 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.987 Waiting for block devices as requested 00:10:43.987 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.246 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.246 09:51:21 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:10:44.246 09:51:21 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:10:44.246 09:51:21 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:44.246 09:51:21 -- common/autotest_common.sh@1499 -- # grep 0000:00:10.0/nvme/nvme 00:10:44.246 09:51:21 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:44.246 09:51:21 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:10:44.246 09:51:21 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:10:44.246 09:51:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme1 00:10:44.246 09:51:21 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme1 00:10:44.246 09:51:21 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme1 ]] 00:10:44.246 09:51:21 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme1 00:10:44.246 09:51:21 -- common/autotest_common.sh@1542 -- # grep oacs 00:10:44.246 09:51:21 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:10:44.246 09:51:21 -- common/autotest_common.sh@1542 -- # oacs=' 0x12a' 00:10:44.246 09:51:21 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:10:44.246 09:51:21 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:10:44.246 09:51:21 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme1 
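The controller capability probe above can be reproduced with nvme-cli directly. A rough sketch, assuming nvme-cli is installed and /dev/nvme1 is the controller of interest; OACS bit 3 (mask 0x8) advertises namespace management, and unvmcap is the unallocated NVM capacity the script checks next:

  ctrl=/dev/nvme1                                         # assumption: controller character device
  oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)  # e.g. ' 0x12a' as in the log
  if (( oacs & 0x8 )); then                               # bit 3 set -> namespace management supported
    echo "$ctrl supports namespace management"
  fi
  nvme id-ctrl "$ctrl" | grep unvmcap                     # 0 on these emulated drives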
00:10:44.246 09:51:21 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:10:44.246 09:51:21 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:10:44.246 09:51:21 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:10:44.246 09:51:21 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:10:44.246 09:51:21 -- common/autotest_common.sh@1554 -- # continue 00:10:44.246 09:51:21 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:10:44.246 09:51:21 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:10:44.246 09:51:21 -- common/autotest_common.sh@1499 -- # grep 0000:00:11.0/nvme/nvme 00:10:44.246 09:51:21 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:10:44.246 09:51:21 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:44.246 09:51:21 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:10:44.246 09:51:21 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:10:44.246 09:51:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:10:44.246 09:51:21 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:10:44.246 09:51:21 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:10:44.246 09:51:21 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:10:44.246 09:51:21 -- common/autotest_common.sh@1542 -- # grep oacs 00:10:44.246 09:51:21 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:10:44.246 09:51:21 -- common/autotest_common.sh@1542 -- # oacs=' 0x12a' 00:10:44.246 09:51:21 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:10:44.246 09:51:21 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:10:44.246 09:51:21 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:10:44.246 09:51:21 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:10:44.246 09:51:21 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:10:44.246 09:51:21 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:10:44.246 09:51:21 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:10:44.246 09:51:21 -- common/autotest_common.sh@1554 -- # continue 00:10:44.246 09:51:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:10:44.246 09:51:21 -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:44.246 09:51:21 -- common/autotest_common.sh@10 -- # set +x 00:10:44.246 09:51:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:10:44.246 09:51:21 -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:44.246 09:51:21 -- common/autotest_common.sh@10 -- # set +x 00:10:44.246 09:51:21 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:45.179 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:45.179 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.179 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.437 09:51:22 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:10:45.437 09:51:22 -- common/autotest_common.sh@727 -- # xtrace_disable 00:10:45.437 09:51:22 -- common/autotest_common.sh@10 -- # set +x 00:10:45.437 09:51:22 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:10:45.437 09:51:22 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:10:45.437 09:51:22 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:10:45.437 09:51:22 -- common/autotest_common.sh@1574 -- 
# bdfs=() 00:10:45.437 09:51:22 -- common/autotest_common.sh@1574 -- # local bdfs 00:10:45.437 09:51:22 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:10:45.437 09:51:22 -- common/autotest_common.sh@1510 -- # bdfs=() 00:10:45.437 09:51:22 -- common/autotest_common.sh@1510 -- # local bdfs 00:10:45.437 09:51:22 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:45.437 09:51:22 -- common/autotest_common.sh@1511 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:45.437 09:51:22 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:10:45.437 09:51:22 -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:10:45.437 09:51:22 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:10:45.437 09:51:22 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:10:45.437 09:51:22 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:10:45.437 09:51:22 -- common/autotest_common.sh@1577 -- # device=0x0010 00:10:45.437 09:51:22 -- common/autotest_common.sh@1578 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:45.437 09:51:22 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:10:45.437 09:51:22 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:10:45.437 09:51:22 -- common/autotest_common.sh@1577 -- # device=0x0010 00:10:45.437 09:51:22 -- common/autotest_common.sh@1578 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:10:45.437 09:51:22 -- common/autotest_common.sh@1583 -- # printf '%s\n' 00:10:45.437 09:51:22 -- common/autotest_common.sh@1589 -- # [[ -z '' ]] 00:10:45.437 09:51:22 -- common/autotest_common.sh@1590 -- # return 0 00:10:45.437 09:51:22 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:10:45.437 09:51:22 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:10:45.437 09:51:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:10:45.437 09:51:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:10:45.437 09:51:22 -- spdk/autotest.sh@162 -- # timing_enter lib 00:10:45.437 09:51:22 -- common/autotest_common.sh@721 -- # xtrace_disable 00:10:45.437 09:51:22 -- common/autotest_common.sh@10 -- # set +x 00:10:45.437 09:51:22 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:45.437 09:51:22 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:45.437 09:51:22 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:45.437 09:51:22 -- common/autotest_common.sh@10 -- # set +x 00:10:45.437 ************************************ 00:10:45.437 START TEST env 00:10:45.437 ************************************ 00:10:45.437 09:51:22 env -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:10:45.696 * Looking for test storage... 
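The BDF enumeration pattern visible above boils down to gen_nvme.sh, jq, and a sysfs device-id check. A sketch under the assumption that the SPDK checkout lives at /home/vagrant/spdk_repo/spdk and jq is available; 0x0a54 is the id the opal revert cleanup filters on, while the emulated drives here report 0x0010 and are skipped:

  rootdir=/home/vagrant/spdk_repo/spdk
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # PCI device id, e.g. 0x0010
    [[ $device == 0x0a54 ]] && echo "$bdf"             # keep only controllers the opal cleanup handles
  done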
00:10:45.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:10:45.696 09:51:22 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:45.696 09:51:22 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:45.696 09:51:22 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:45.696 09:51:22 env -- common/autotest_common.sh@10 -- # set +x 00:10:45.696 ************************************ 00:10:45.696 START TEST env_memory 00:10:45.696 ************************************ 00:10:45.696 09:51:22 env.env_memory -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:10:45.696 00:10:45.696 00:10:45.696 CUnit - A unit testing framework for C - Version 2.1-3 00:10:45.696 http://cunit.sourceforge.net/ 00:10:45.696 00:10:45.696 00:10:45.696 Suite: memory 00:10:45.696 Test: alloc and free memory map ...[2024-05-15 09:51:22.951244] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:10:45.696 passed 00:10:45.696 Test: mem map translation ...[2024-05-15 09:51:22.997810] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:10:45.696 [2024-05-15 09:51:22.998172] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:10:45.696 [2024-05-15 09:51:22.998376] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:10:45.696 [2024-05-15 09:51:22.998515] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:10:45.696 passed 00:10:45.696 Test: mem map registration ...[2024-05-15 09:51:23.064974] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:10:45.696 [2024-05-15 09:51:23.065330] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:10:45.956 passed 00:10:45.956 Test: mem map adjacent registrations ...passed 00:10:45.956 00:10:45.956 Run Summary: Type Total Ran Passed Failed Inactive 00:10:45.956 suites 1 1 n/a 0 0 00:10:45.956 tests 4 4 4 0 0 00:10:45.956 asserts 152 152 152 0 n/a 00:10:45.956 00:10:45.956 Elapsed time = 0.252 seconds 00:10:45.956 00:10:45.956 real 0m0.273s 00:10:45.956 user 0m0.251s 00:10:45.956 sys 0m0.017s 00:10:45.956 09:51:23 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:45.956 09:51:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:10:45.956 ************************************ 00:10:45.956 END TEST env_memory 00:10:45.956 ************************************ 00:10:45.956 09:51:23 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:45.956 09:51:23 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:45.957 09:51:23 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:45.957 09:51:23 env -- common/autotest_common.sh@10 -- # set +x 00:10:45.957 ************************************ 00:10:45.957 START TEST env_vtophys 00:10:45.957 ************************************ 00:10:45.957 09:51:23 
env.env_vtophys -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:10:45.957 EAL: lib.eal log level changed from notice to debug 00:10:45.957 EAL: Detected lcore 0 as core 0 on socket 0 00:10:45.957 EAL: Detected lcore 1 as core 0 on socket 0 00:10:45.957 EAL: Detected lcore 2 as core 0 on socket 0 00:10:45.957 EAL: Detected lcore 3 as core 0 on socket 0 00:10:45.957 EAL: Detected lcore 4 as core 0 on socket 0 00:10:45.957 EAL: Detected lcore 5 as core 0 on socket 0 00:10:45.957 EAL: Detected lcore 6 as core 0 on socket 0 00:10:45.957 EAL: Detected lcore 7 as core 0 on socket 0 00:10:45.957 EAL: Detected lcore 8 as core 0 on socket 0 00:10:45.957 EAL: Detected lcore 9 as core 0 on socket 0 00:10:45.957 EAL: Maximum logical cores by configuration: 128 00:10:45.957 EAL: Detected CPU lcores: 10 00:10:45.957 EAL: Detected NUMA nodes: 1 00:10:45.957 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:10:45.957 EAL: Detected shared linkage of DPDK 00:10:45.957 EAL: No shared files mode enabled, IPC will be disabled 00:10:45.957 EAL: Selected IOVA mode 'PA' 00:10:45.957 EAL: Probing VFIO support... 00:10:45.957 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:45.957 EAL: VFIO modules not loaded, skipping VFIO support... 00:10:45.957 EAL: Ask a virtual area of 0x2e000 bytes 00:10:45.957 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:10:45.957 EAL: Setting up physically contiguous memory... 00:10:45.957 EAL: Setting maximum number of open files to 524288 00:10:45.957 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:10:45.957 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:10:45.957 EAL: Ask a virtual area of 0x61000 bytes 00:10:45.957 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:10:45.957 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:45.957 EAL: Ask a virtual area of 0x400000000 bytes 00:10:45.957 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:10:45.957 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:10:45.957 EAL: Ask a virtual area of 0x61000 bytes 00:10:45.957 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:10:45.957 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:45.957 EAL: Ask a virtual area of 0x400000000 bytes 00:10:45.957 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:10:45.957 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:10:45.957 EAL: Ask a virtual area of 0x61000 bytes 00:10:45.957 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:10:45.957 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:45.957 EAL: Ask a virtual area of 0x400000000 bytes 00:10:45.957 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:10:45.957 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:10:45.957 EAL: Ask a virtual area of 0x61000 bytes 00:10:45.957 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:10:45.957 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:10:45.957 EAL: Ask a virtual area of 0x400000000 bytes 00:10:45.957 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:10:45.957 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:10:45.957 EAL: Hugepages will be freed exactly as allocated. 
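A quick way to sanity-check the hugepage reservation the EAL is describing here (2 MiB pages on a single NUMA node, reserved earlier by scripts/setup.sh). This is only an inspection sketch, not part of the test:

  grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # expect 2048, matching the status output earlier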
00:10:45.957 EAL: No shared files mode enabled, IPC is disabled 00:10:45.957 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: TSC frequency is ~2100000 KHz 00:10:46.215 EAL: Main lcore 0 is ready (tid=7fce18058a00;cpuset=[0]) 00:10:46.215 EAL: Trying to obtain current memory policy. 00:10:46.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.215 EAL: Restoring previous memory policy: 0 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was expanded by 2MB 00:10:46.215 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:10:46.215 EAL: No PCI address specified using 'addr=' in: bus=pci 00:10:46.215 EAL: Mem event callback 'spdk:(nil)' registered 00:10:46.215 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:10:46.215 00:10:46.215 00:10:46.215 CUnit - A unit testing framework for C - Version 2.1-3 00:10:46.215 http://cunit.sourceforge.net/ 00:10:46.215 00:10:46.215 00:10:46.215 Suite: components_suite 00:10:46.215 Test: vtophys_malloc_test ...passed 00:10:46.215 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:10:46.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.215 EAL: Restoring previous memory policy: 4 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was expanded by 4MB 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was shrunk by 4MB 00:10:46.215 EAL: Trying to obtain current memory policy. 00:10:46.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.215 EAL: Restoring previous memory policy: 4 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was expanded by 6MB 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was shrunk by 6MB 00:10:46.215 EAL: Trying to obtain current memory policy. 00:10:46.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.215 EAL: Restoring previous memory policy: 4 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was expanded by 10MB 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was shrunk by 10MB 00:10:46.215 EAL: Trying to obtain current memory policy. 
00:10:46.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.215 EAL: Restoring previous memory policy: 4 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was expanded by 18MB 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was shrunk by 18MB 00:10:46.215 EAL: Trying to obtain current memory policy. 00:10:46.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.215 EAL: Restoring previous memory policy: 4 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was expanded by 34MB 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was shrunk by 34MB 00:10:46.215 EAL: Trying to obtain current memory policy. 00:10:46.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.215 EAL: Restoring previous memory policy: 4 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was expanded by 66MB 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was shrunk by 66MB 00:10:46.215 EAL: Trying to obtain current memory policy. 00:10:46.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.215 EAL: Restoring previous memory policy: 4 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.215 EAL: request: mp_malloc_sync 00:10:46.215 EAL: No shared files mode enabled, IPC is disabled 00:10:46.215 EAL: Heap on socket 0 was expanded by 130MB 00:10:46.215 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.473 EAL: request: mp_malloc_sync 00:10:46.473 EAL: No shared files mode enabled, IPC is disabled 00:10:46.473 EAL: Heap on socket 0 was shrunk by 130MB 00:10:46.473 EAL: Trying to obtain current memory policy. 00:10:46.473 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.473 EAL: Restoring previous memory policy: 4 00:10:46.473 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.473 EAL: request: mp_malloc_sync 00:10:46.473 EAL: No shared files mode enabled, IPC is disabled 00:10:46.473 EAL: Heap on socket 0 was expanded by 258MB 00:10:46.473 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.730 EAL: request: mp_malloc_sync 00:10:46.730 EAL: No shared files mode enabled, IPC is disabled 00:10:46.730 EAL: Heap on socket 0 was shrunk by 258MB 00:10:46.730 EAL: Trying to obtain current memory policy. 
00:10:46.730 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:46.730 EAL: Restoring previous memory policy: 4 00:10:46.730 EAL: Calling mem event callback 'spdk:(nil)' 00:10:46.730 EAL: request: mp_malloc_sync 00:10:46.730 EAL: No shared files mode enabled, IPC is disabled 00:10:46.730 EAL: Heap on socket 0 was expanded by 514MB 00:10:46.989 EAL: Calling mem event callback 'spdk:(nil)' 00:10:47.247 EAL: request: mp_malloc_sync 00:10:47.247 EAL: No shared files mode enabled, IPC is disabled 00:10:47.247 EAL: Heap on socket 0 was shrunk by 514MB 00:10:47.247 EAL: Trying to obtain current memory policy. 00:10:47.247 EAL: Setting policy MPOL_PREFERRED for socket 0 00:10:47.505 EAL: Restoring previous memory policy: 4 00:10:47.505 EAL: Calling mem event callback 'spdk:(nil)' 00:10:47.505 EAL: request: mp_malloc_sync 00:10:47.505 EAL: No shared files mode enabled, IPC is disabled 00:10:47.505 EAL: Heap on socket 0 was expanded by 1026MB 00:10:47.764 EAL: Calling mem event callback 'spdk:(nil)' 00:10:48.022 EAL: request: mp_malloc_sync 00:10:48.022 EAL: No shared files mode enabled, IPC is disabled 00:10:48.022 EAL: Heap on socket 0 was shrunk by 1026MB 00:10:48.022 passed 00:10:48.022 00:10:48.022 Run Summary: Type Total Ran Passed Failed Inactive 00:10:48.022 suites 1 1 n/a 0 0 00:10:48.022 tests 2 2 2 0 0 00:10:48.022 asserts 6466 6466 6466 0 n/a 00:10:48.022 00:10:48.022 Elapsed time = 1.964 seconds 00:10:48.022 EAL: Calling mem event callback 'spdk:(nil)' 00:10:48.022 EAL: request: mp_malloc_sync 00:10:48.022 EAL: No shared files mode enabled, IPC is disabled 00:10:48.022 EAL: Heap on socket 0 was shrunk by 2MB 00:10:48.022 EAL: No shared files mode enabled, IPC is disabled 00:10:48.022 EAL: No shared files mode enabled, IPC is disabled 00:10:48.022 EAL: No shared files mode enabled, IPC is disabled 00:10:48.022 00:10:48.022 real 0m2.172s 00:10:48.022 user 0m1.221s 00:10:48.022 sys 0m0.815s 00:10:48.022 09:51:25 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:48.022 09:51:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:10:48.022 ************************************ 00:10:48.022 END TEST env_vtophys 00:10:48.022 ************************************ 00:10:48.280 09:51:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:48.280 09:51:25 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:48.280 09:51:25 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:48.280 09:51:25 env -- common/autotest_common.sh@10 -- # set +x 00:10:48.280 ************************************ 00:10:48.280 START TEST env_pci 00:10:48.280 ************************************ 00:10:48.280 09:51:25 env.env_pci -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:10:48.280 00:10:48.280 00:10:48.280 CUnit - A unit testing framework for C - Version 2.1-3 00:10:48.280 http://cunit.sourceforge.net/ 00:10:48.280 00:10:48.280 00:10:48.280 Suite: pci 00:10:48.280 Test: pci_hook ...[2024-05-15 09:51:25.467608] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59818 has claimed it 00:10:48.280 passed 00:10:48.280 00:10:48.280 EAL: Cannot find device (10000:00:01.0) 00:10:48.280 EAL: Failed to attach device on primary process 00:10:48.280 Run Summary: Type Total Ran Passed Failed Inactive 00:10:48.280 suites 1 1 n/a 0 0 00:10:48.280 tests 1 1 1 0 0 
00:10:48.280 asserts 25 25 25 0 n/a 00:10:48.280 00:10:48.280 Elapsed time = 0.003 seconds 00:10:48.280 00:10:48.280 real 0m0.023s 00:10:48.280 user 0m0.011s 00:10:48.280 sys 0m0.012s 00:10:48.280 09:51:25 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:48.280 09:51:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:10:48.280 ************************************ 00:10:48.280 END TEST env_pci 00:10:48.280 ************************************ 00:10:48.280 09:51:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:10:48.280 09:51:25 env -- env/env.sh@15 -- # uname 00:10:48.280 09:51:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:10:48.281 09:51:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:10:48.281 09:51:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:48.281 09:51:25 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:10:48.281 09:51:25 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:48.281 09:51:25 env -- common/autotest_common.sh@10 -- # set +x 00:10:48.281 ************************************ 00:10:48.281 START TEST env_dpdk_post_init 00:10:48.281 ************************************ 00:10:48.281 09:51:25 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:10:48.281 EAL: Detected CPU lcores: 10 00:10:48.281 EAL: Detected NUMA nodes: 1 00:10:48.281 EAL: Detected shared linkage of DPDK 00:10:48.281 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:48.281 EAL: Selected IOVA mode 'PA' 00:10:48.539 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:48.539 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:10:48.539 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:10:48.539 Starting DPDK initialization... 00:10:48.539 Starting SPDK post initialization... 00:10:48.539 SPDK NVMe probe 00:10:48.539 Attaching to 0000:00:10.0 00:10:48.539 Attaching to 0000:00:11.0 00:10:48.539 Attached to 0000:00:10.0 00:10:48.539 Attached to 0000:00:11.0 00:10:48.539 Cleaning up... 
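The probe sequence above comes from the env_dpdk_post_init binary; it can be rerun by hand with the same arguments the harness used, assuming the SPDK build at this path, devices left bound by scripts/setup.sh, and root privileges:

  testdir=/home/vagrant/spdk_repo/spdk/test/env
  "$testdir/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000   # core mask 0x1, fixed base virtual address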
00:10:48.539 00:10:48.539 real 0m0.201s 00:10:48.539 user 0m0.042s 00:10:48.539 sys 0m0.059s 00:10:48.539 09:51:25 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:48.539 09:51:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:10:48.539 ************************************ 00:10:48.539 END TEST env_dpdk_post_init 00:10:48.539 ************************************ 00:10:48.539 09:51:25 env -- env/env.sh@26 -- # uname 00:10:48.539 09:51:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:10:48.539 09:51:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:48.539 09:51:25 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:48.539 09:51:25 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:48.539 09:51:25 env -- common/autotest_common.sh@10 -- # set +x 00:10:48.539 ************************************ 00:10:48.539 START TEST env_mem_callbacks 00:10:48.539 ************************************ 00:10:48.539 09:51:25 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:10:48.539 EAL: Detected CPU lcores: 10 00:10:48.539 EAL: Detected NUMA nodes: 1 00:10:48.539 EAL: Detected shared linkage of DPDK 00:10:48.539 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:10:48.539 EAL: Selected IOVA mode 'PA' 00:10:48.798 TELEMETRY: No legacy callbacks, legacy socket not created 00:10:48.798 00:10:48.798 00:10:48.798 CUnit - A unit testing framework for C - Version 2.1-3 00:10:48.798 http://cunit.sourceforge.net/ 00:10:48.798 00:10:48.798 00:10:48.798 Suite: memory 00:10:48.798 Test: test ... 00:10:48.798 register 0x200000200000 2097152 00:10:48.798 malloc 3145728 00:10:48.798 register 0x200000400000 4194304 00:10:48.798 buf 0x200000500000 len 3145728 PASSED 00:10:48.798 malloc 64 00:10:48.798 buf 0x2000004fff40 len 64 PASSED 00:10:48.798 malloc 4194304 00:10:48.798 register 0x200000800000 6291456 00:10:48.798 buf 0x200000a00000 len 4194304 PASSED 00:10:48.798 free 0x200000500000 3145728 00:10:48.798 free 0x2000004fff40 64 00:10:48.798 unregister 0x200000400000 4194304 PASSED 00:10:48.798 free 0x200000a00000 4194304 00:10:48.798 unregister 0x200000800000 6291456 PASSED 00:10:48.798 malloc 8388608 00:10:48.798 register 0x200000400000 10485760 00:10:48.798 buf 0x200000600000 len 8388608 PASSED 00:10:48.798 free 0x200000600000 8388608 00:10:48.798 unregister 0x200000400000 10485760 PASSED 00:10:48.798 passed 00:10:48.798 00:10:48.798 Run Summary: Type Total Ran Passed Failed Inactive 00:10:48.798 suites 1 1 n/a 0 0 00:10:48.798 tests 1 1 1 0 0 00:10:48.798 asserts 15 15 15 0 n/a 00:10:48.798 00:10:48.798 Elapsed time = 0.010 seconds 00:10:48.798 00:10:48.798 real 0m0.152s 00:10:48.798 user 0m0.018s 00:10:48.798 sys 0m0.033s 00:10:48.798 09:51:25 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:48.798 09:51:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:10:48.798 ************************************ 00:10:48.798 END TEST env_mem_callbacks 00:10:48.798 ************************************ 00:10:48.798 00:10:48.798 real 0m3.220s 00:10:48.798 user 0m1.664s 00:10:48.798 sys 0m1.211s 00:10:48.798 09:51:25 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:48.798 09:51:25 env -- common/autotest_common.sh@10 -- # set +x 00:10:48.798 ************************************ 00:10:48.798 END TEST env 00:10:48.798 
************************************ 00:10:48.798 09:51:26 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:48.798 09:51:26 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:48.798 09:51:26 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:48.798 09:51:26 -- common/autotest_common.sh@10 -- # set +x 00:10:48.798 ************************************ 00:10:48.798 START TEST rpc 00:10:48.798 ************************************ 00:10:48.798 09:51:26 rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:10:48.798 * Looking for test storage... 00:10:48.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:48.798 09:51:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=59933 00:10:48.798 09:51:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:10:48.798 09:51:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:48.798 09:51:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 59933 00:10:48.798 09:51:26 rpc -- common/autotest_common.sh@828 -- # '[' -z 59933 ']' 00:10:48.798 09:51:26 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.798 09:51:26 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:48.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.798 09:51:26 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.798 09:51:26 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:48.798 09:51:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.056 [2024-05-15 09:51:26.245200] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:10:49.056 [2024-05-15 09:51:26.246319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59933 ] 00:10:49.056 [2024-05-15 09:51:26.401391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.314 [2024-05-15 09:51:26.579372] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:10:49.314 [2024-05-15 09:51:26.579476] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 59933' to capture a snapshot of events at runtime. 00:10:49.314 [2024-05-15 09:51:26.579504] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.314 [2024-05-15 09:51:26.579528] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.314 [2024-05-15 09:51:26.579549] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid59933 for offline analysis/debug. 
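The rpc_integrity block that follows drives the bdev RPCs through the harness's rpc_cmd helper; the same calls can be issued by hand with the rpc.py client from the SPDK tree. A rough manual equivalent, assuming the paths from this log, a target listening on /var/tmp/spdk.sock, and the default names Malloc0/Passthru0 that the log shows (a fresh target may number the malloc bdev differently):

  spdk=/home/vagrant/spdk_repo/spdk
  # start the target with bdev tracepoints enabled, as the harness does above, and wait for its socket
  "$spdk/build/bin/spdk_tgt" -e bdev &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
  # 8 MB malloc bdev with 512-byte blocks (16384 blocks, matching the JSON dump below)
  "$spdk/scripts/rpc.py" bdev_malloc_create 8 512
  "$spdk/scripts/rpc.py" bdev_passthru_create -b Malloc0 -p Passthru0
  "$spdk/scripts/rpc.py" bdev_get_bdevs | jq length
  # tear down in the order the test uses: passthru first, then the malloc base
  "$spdk/scripts/rpc.py" bdev_passthru_delete Passthru0
  "$spdk/scripts/rpc.py" bdev_malloc_delete Malloc0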
00:10:49.314 [2024-05-15 09:51:26.579629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.881 09:51:27 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:49.881 09:51:27 rpc -- common/autotest_common.sh@861 -- # return 0 00:10:49.881 09:51:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:49.881 09:51:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:10:49.881 09:51:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:10:49.881 09:51:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:10:49.881 09:51:27 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:49.881 09:51:27 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:49.881 09:51:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.140 ************************************ 00:10:50.140 START TEST rpc_integrity 00:10:50.140 ************************************ 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:50.140 { 00:10:50.140 "aliases": [ 00:10:50.140 "e74e4e74-912d-4cfc-a62a-0e1a049dec21" 00:10:50.140 ], 00:10:50.140 "assigned_rate_limits": { 00:10:50.140 "r_mbytes_per_sec": 0, 00:10:50.140 "rw_ios_per_sec": 0, 00:10:50.140 "rw_mbytes_per_sec": 0, 00:10:50.140 "w_mbytes_per_sec": 0 00:10:50.140 }, 00:10:50.140 "block_size": 512, 00:10:50.140 "claimed": false, 00:10:50.140 "driver_specific": {}, 00:10:50.140 "memory_domains": [ 00:10:50.140 { 00:10:50.140 "dma_device_id": "system", 00:10:50.140 "dma_device_type": 1 00:10:50.140 }, 00:10:50.140 { 00:10:50.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.140 "dma_device_type": 2 00:10:50.140 } 00:10:50.140 ], 00:10:50.140 "name": "Malloc0", 
00:10:50.140 "num_blocks": 16384, 00:10:50.140 "product_name": "Malloc disk", 00:10:50.140 "supported_io_types": { 00:10:50.140 "abort": true, 00:10:50.140 "compare": false, 00:10:50.140 "compare_and_write": false, 00:10:50.140 "flush": true, 00:10:50.140 "nvme_admin": false, 00:10:50.140 "nvme_io": false, 00:10:50.140 "read": true, 00:10:50.140 "reset": true, 00:10:50.140 "unmap": true, 00:10:50.140 "write": true, 00:10:50.140 "write_zeroes": true 00:10:50.140 }, 00:10:50.140 "uuid": "e74e4e74-912d-4cfc-a62a-0e1a049dec21", 00:10:50.140 "zoned": false 00:10:50.140 } 00:10:50.140 ]' 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:50.140 [2024-05-15 09:51:27.406959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:10:50.140 [2024-05-15 09:51:27.407022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.140 [2024-05-15 09:51:27.407043] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x133be60 00:10:50.140 [2024-05-15 09:51:27.407054] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.140 [2024-05-15 09:51:27.408978] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.140 [2024-05-15 09:51:27.409014] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:50.140 Passthru0 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:50.140 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.140 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:50.140 { 00:10:50.140 "aliases": [ 00:10:50.140 "e74e4e74-912d-4cfc-a62a-0e1a049dec21" 00:10:50.140 ], 00:10:50.140 "assigned_rate_limits": { 00:10:50.140 "r_mbytes_per_sec": 0, 00:10:50.140 "rw_ios_per_sec": 0, 00:10:50.140 "rw_mbytes_per_sec": 0, 00:10:50.140 "w_mbytes_per_sec": 0 00:10:50.140 }, 00:10:50.140 "block_size": 512, 00:10:50.140 "claim_type": "exclusive_write", 00:10:50.140 "claimed": true, 00:10:50.140 "driver_specific": {}, 00:10:50.140 "memory_domains": [ 00:10:50.140 { 00:10:50.140 "dma_device_id": "system", 00:10:50.140 "dma_device_type": 1 00:10:50.140 }, 00:10:50.140 { 00:10:50.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.140 "dma_device_type": 2 00:10:50.140 } 00:10:50.140 ], 00:10:50.140 "name": "Malloc0", 00:10:50.140 "num_blocks": 16384, 00:10:50.140 "product_name": "Malloc disk", 00:10:50.140 "supported_io_types": { 00:10:50.140 "abort": true, 00:10:50.140 "compare": false, 00:10:50.140 "compare_and_write": false, 00:10:50.140 "flush": true, 00:10:50.140 "nvme_admin": false, 00:10:50.140 "nvme_io": false, 00:10:50.140 "read": true, 00:10:50.140 "reset": true, 00:10:50.140 "unmap": true, 00:10:50.140 "write": true, 00:10:50.140 "write_zeroes": true 00:10:50.140 }, 00:10:50.140 "uuid": 
"e74e4e74-912d-4cfc-a62a-0e1a049dec21", 00:10:50.140 "zoned": false 00:10:50.140 }, 00:10:50.140 { 00:10:50.140 "aliases": [ 00:10:50.140 "429c3864-a960-5677-84b0-918c4bed22b4" 00:10:50.140 ], 00:10:50.140 "assigned_rate_limits": { 00:10:50.140 "r_mbytes_per_sec": 0, 00:10:50.140 "rw_ios_per_sec": 0, 00:10:50.140 "rw_mbytes_per_sec": 0, 00:10:50.140 "w_mbytes_per_sec": 0 00:10:50.140 }, 00:10:50.140 "block_size": 512, 00:10:50.140 "claimed": false, 00:10:50.140 "driver_specific": { 00:10:50.140 "passthru": { 00:10:50.140 "base_bdev_name": "Malloc0", 00:10:50.140 "name": "Passthru0" 00:10:50.140 } 00:10:50.140 }, 00:10:50.140 "memory_domains": [ 00:10:50.140 { 00:10:50.140 "dma_device_id": "system", 00:10:50.140 "dma_device_type": 1 00:10:50.140 }, 00:10:50.140 { 00:10:50.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.140 "dma_device_type": 2 00:10:50.140 } 00:10:50.140 ], 00:10:50.140 "name": "Passthru0", 00:10:50.140 "num_blocks": 16384, 00:10:50.141 "product_name": "passthru", 00:10:50.141 "supported_io_types": { 00:10:50.141 "abort": true, 00:10:50.141 "compare": false, 00:10:50.141 "compare_and_write": false, 00:10:50.141 "flush": true, 00:10:50.141 "nvme_admin": false, 00:10:50.141 "nvme_io": false, 00:10:50.141 "read": true, 00:10:50.141 "reset": true, 00:10:50.141 "unmap": true, 00:10:50.141 "write": true, 00:10:50.141 "write_zeroes": true 00:10:50.141 }, 00:10:50.141 "uuid": "429c3864-a960-5677-84b0-918c4bed22b4", 00:10:50.141 "zoned": false 00:10:50.141 } 00:10:50.141 ]' 00:10:50.141 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:50.141 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:50.141 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:50.141 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.141 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:50.141 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.141 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:10:50.141 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.141 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:50.141 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.141 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:50.141 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.141 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:50.141 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.141 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:50.399 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:50.399 09:51:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:50.399 00:10:50.399 real 0m0.290s 00:10:50.399 user 0m0.154s 00:10:50.399 sys 0m0.055s 00:10:50.399 ************************************ 00:10:50.399 END TEST rpc_integrity 00:10:50.399 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:50.399 09:51:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:50.399 ************************************ 00:10:50.399 09:51:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:10:50.399 09:51:27 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:50.399 
09:51:27 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:50.399 09:51:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.399 ************************************ 00:10:50.399 START TEST rpc_plugins 00:10:50.399 ************************************ 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:10:50.399 09:51:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.399 09:51:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:10:50.399 09:51:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.399 09:51:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:10:50.399 { 00:10:50.399 "aliases": [ 00:10:50.399 "ca7373e8-d63b-4b01-a904-033c18bb7fe1" 00:10:50.399 ], 00:10:50.399 "assigned_rate_limits": { 00:10:50.399 "r_mbytes_per_sec": 0, 00:10:50.399 "rw_ios_per_sec": 0, 00:10:50.399 "rw_mbytes_per_sec": 0, 00:10:50.399 "w_mbytes_per_sec": 0 00:10:50.399 }, 00:10:50.399 "block_size": 4096, 00:10:50.399 "claimed": false, 00:10:50.399 "driver_specific": {}, 00:10:50.399 "memory_domains": [ 00:10:50.399 { 00:10:50.399 "dma_device_id": "system", 00:10:50.399 "dma_device_type": 1 00:10:50.399 }, 00:10:50.399 { 00:10:50.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.399 "dma_device_type": 2 00:10:50.399 } 00:10:50.399 ], 00:10:50.399 "name": "Malloc1", 00:10:50.399 "num_blocks": 256, 00:10:50.399 "product_name": "Malloc disk", 00:10:50.399 "supported_io_types": { 00:10:50.399 "abort": true, 00:10:50.399 "compare": false, 00:10:50.399 "compare_and_write": false, 00:10:50.399 "flush": true, 00:10:50.399 "nvme_admin": false, 00:10:50.399 "nvme_io": false, 00:10:50.399 "read": true, 00:10:50.399 "reset": true, 00:10:50.399 "unmap": true, 00:10:50.399 "write": true, 00:10:50.399 "write_zeroes": true 00:10:50.399 }, 00:10:50.399 "uuid": "ca7373e8-d63b-4b01-a904-033c18bb7fe1", 00:10:50.399 "zoned": false 00:10:50.399 } 00:10:50.399 ]' 00:10:50.399 09:51:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:10:50.399 09:51:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:10:50.399 09:51:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.399 09:51:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.399 09:51:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:10:50.399 09:51:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:10:50.399 09:51:27 rpc.rpc_plugins 
-- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:10:50.399 00:10:50.399 real 0m0.150s 00:10:50.399 user 0m0.095s 00:10:50.399 sys 0m0.021s 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:50.399 09:51:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:10:50.399 ************************************ 00:10:50.399 END TEST rpc_plugins 00:10:50.399 ************************************ 00:10:50.657 09:51:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:10:50.657 09:51:27 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:50.657 09:51:27 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:50.657 09:51:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.657 ************************************ 00:10:50.657 START TEST rpc_trace_cmd_test 00:10:50.657 ************************************ 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:10:50.657 "bdev": { 00:10:50.657 "mask": "0x8", 00:10:50.657 "tpoint_mask": "0xffffffffffffffff" 00:10:50.657 }, 00:10:50.657 "bdev_nvme": { 00:10:50.657 "mask": "0x4000", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "blobfs": { 00:10:50.657 "mask": "0x80", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "dsa": { 00:10:50.657 "mask": "0x200", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "ftl": { 00:10:50.657 "mask": "0x40", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "iaa": { 00:10:50.657 "mask": "0x1000", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "iscsi_conn": { 00:10:50.657 "mask": "0x2", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "nvme_pcie": { 00:10:50.657 "mask": "0x800", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "nvme_tcp": { 00:10:50.657 "mask": "0x2000", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "nvmf_rdma": { 00:10:50.657 "mask": "0x10", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "nvmf_tcp": { 00:10:50.657 "mask": "0x20", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "scsi": { 00:10:50.657 "mask": "0x4", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "sock": { 00:10:50.657 "mask": "0x8000", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "thread": { 00:10:50.657 "mask": "0x400", 00:10:50.657 "tpoint_mask": "0x0" 00:10:50.657 }, 00:10:50.657 "tpoint_group_mask": "0x8", 00:10:50.657 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid59933" 00:10:50.657 }' 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # 
jq 'has("tpoint_shm_path")' 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:10:50.657 09:51:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:10:50.657 09:51:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:10:50.657 09:51:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:10:50.915 09:51:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:10:50.915 00:10:50.915 real 0m0.264s 00:10:50.915 user 0m0.209s 00:10:50.916 sys 0m0.044s 00:10:50.916 09:51:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:50.916 09:51:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.916 ************************************ 00:10:50.916 END TEST rpc_trace_cmd_test 00:10:50.916 ************************************ 00:10:50.916 09:51:28 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:10:50.916 09:51:28 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:10:50.916 09:51:28 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:50.916 09:51:28 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:50.916 09:51:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.916 ************************************ 00:10:50.916 START TEST go_rpc 00:10:50.916 ************************************ 00:10:50.916 09:51:28 rpc.go_rpc -- common/autotest_common.sh@1122 -- # go_rpc 00:10:50.916 09:51:28 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:10:50.916 09:51:28 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:10:50.916 09:51:28 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:10:50.916 09:51:28 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:10:50.916 09:51:28 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:10:50.916 09:51:28 rpc.go_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.916 09:51:28 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.916 09:51:28 rpc.go_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:50.916 09:51:28 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:10:50.916 09:51:28 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:10:50.916 09:51:28 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["9a6422cb-846e-4971-97a2-ba81852ea37f"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"9a6422cb-846e-4971-97a2-ba81852ea37f","zoned":false}]' 00:10:50.916 09:51:28 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:10:50.916 09:51:28 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:10:50.916 09:51:28 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:10:50.916 09:51:28 rpc.go_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:50.916 09:51:28 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.173 09:51:28 rpc.go_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.173 09:51:28 rpc.go_rpc -- 
rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:10:51.173 09:51:28 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:10:51.173 09:51:28 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:10:51.173 09:51:28 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:10:51.173 00:10:51.173 real 0m0.219s 00:10:51.173 user 0m0.134s 00:10:51.173 sys 0m0.050s 00:10:51.173 09:51:28 rpc.go_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:51.173 09:51:28 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.173 ************************************ 00:10:51.173 END TEST go_rpc 00:10:51.173 ************************************ 00:10:51.173 09:51:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:10:51.173 09:51:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:10:51.173 09:51:28 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:51.173 09:51:28 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:51.173 09:51:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.173 ************************************ 00:10:51.173 START TEST rpc_daemon_integrity 00:10:51.173 ************************************ 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:10:51.173 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:10:51.174 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.174 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:51.174 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.174 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:10:51.174 { 00:10:51.174 "aliases": [ 00:10:51.174 "77d04740-4994-4394-9c69-df71403a0980" 00:10:51.174 ], 00:10:51.174 "assigned_rate_limits": { 00:10:51.174 "r_mbytes_per_sec": 0, 00:10:51.174 "rw_ios_per_sec": 0, 00:10:51.174 "rw_mbytes_per_sec": 0, 00:10:51.174 "w_mbytes_per_sec": 0 00:10:51.174 }, 00:10:51.174 "block_size": 512, 00:10:51.174 "claimed": false, 00:10:51.174 "driver_specific": {}, 00:10:51.174 "memory_domains": [ 00:10:51.174 { 00:10:51.174 "dma_device_id": "system", 00:10:51.174 "dma_device_type": 1 00:10:51.174 }, 00:10:51.174 { 00:10:51.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.174 "dma_device_type": 2 00:10:51.174 } 00:10:51.174 ], 00:10:51.174 "name": 
"Malloc3", 00:10:51.174 "num_blocks": 16384, 00:10:51.174 "product_name": "Malloc disk", 00:10:51.174 "supported_io_types": { 00:10:51.174 "abort": true, 00:10:51.174 "compare": false, 00:10:51.174 "compare_and_write": false, 00:10:51.174 "flush": true, 00:10:51.174 "nvme_admin": false, 00:10:51.174 "nvme_io": false, 00:10:51.174 "read": true, 00:10:51.174 "reset": true, 00:10:51.174 "unmap": true, 00:10:51.174 "write": true, 00:10:51.174 "write_zeroes": true 00:10:51.174 }, 00:10:51.174 "uuid": "77d04740-4994-4394-9c69-df71403a0980", 00:10:51.174 "zoned": false 00:10:51.174 } 00:10:51.174 ]' 00:10:51.174 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:51.431 [2024-05-15 09:51:28.567827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:51.431 [2024-05-15 09:51:28.567890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.431 [2024-05-15 09:51:28.567914] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x138eaf0 00:10:51.431 [2024-05-15 09:51:28.567925] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.431 [2024-05-15 09:51:28.569622] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.431 [2024-05-15 09:51:28.569658] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:10:51.431 Passthru0 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:10:51.431 { 00:10:51.431 "aliases": [ 00:10:51.431 "77d04740-4994-4394-9c69-df71403a0980" 00:10:51.431 ], 00:10:51.431 "assigned_rate_limits": { 00:10:51.431 "r_mbytes_per_sec": 0, 00:10:51.431 "rw_ios_per_sec": 0, 00:10:51.431 "rw_mbytes_per_sec": 0, 00:10:51.431 "w_mbytes_per_sec": 0 00:10:51.431 }, 00:10:51.431 "block_size": 512, 00:10:51.431 "claim_type": "exclusive_write", 00:10:51.431 "claimed": true, 00:10:51.431 "driver_specific": {}, 00:10:51.431 "memory_domains": [ 00:10:51.431 { 00:10:51.431 "dma_device_id": "system", 00:10:51.431 "dma_device_type": 1 00:10:51.431 }, 00:10:51.431 { 00:10:51.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.431 "dma_device_type": 2 00:10:51.431 } 00:10:51.431 ], 00:10:51.431 "name": "Malloc3", 00:10:51.431 "num_blocks": 16384, 00:10:51.431 "product_name": "Malloc disk", 00:10:51.431 "supported_io_types": { 00:10:51.431 "abort": true, 00:10:51.431 "compare": false, 00:10:51.431 "compare_and_write": false, 00:10:51.431 "flush": true, 00:10:51.431 "nvme_admin": false, 00:10:51.431 "nvme_io": false, 00:10:51.431 "read": true, 00:10:51.431 "reset": true, 00:10:51.431 "unmap": true, 00:10:51.431 "write": true, 
00:10:51.431 "write_zeroes": true 00:10:51.431 }, 00:10:51.431 "uuid": "77d04740-4994-4394-9c69-df71403a0980", 00:10:51.431 "zoned": false 00:10:51.431 }, 00:10:51.431 { 00:10:51.431 "aliases": [ 00:10:51.431 "135495ab-df9f-5691-8d39-31b0a44e808a" 00:10:51.431 ], 00:10:51.431 "assigned_rate_limits": { 00:10:51.431 "r_mbytes_per_sec": 0, 00:10:51.431 "rw_ios_per_sec": 0, 00:10:51.431 "rw_mbytes_per_sec": 0, 00:10:51.431 "w_mbytes_per_sec": 0 00:10:51.431 }, 00:10:51.431 "block_size": 512, 00:10:51.431 "claimed": false, 00:10:51.431 "driver_specific": { 00:10:51.431 "passthru": { 00:10:51.431 "base_bdev_name": "Malloc3", 00:10:51.431 "name": "Passthru0" 00:10:51.431 } 00:10:51.431 }, 00:10:51.431 "memory_domains": [ 00:10:51.431 { 00:10:51.431 "dma_device_id": "system", 00:10:51.431 "dma_device_type": 1 00:10:51.431 }, 00:10:51.431 { 00:10:51.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.431 "dma_device_type": 2 00:10:51.431 } 00:10:51.431 ], 00:10:51.431 "name": "Passthru0", 00:10:51.431 "num_blocks": 16384, 00:10:51.431 "product_name": "passthru", 00:10:51.431 "supported_io_types": { 00:10:51.431 "abort": true, 00:10:51.431 "compare": false, 00:10:51.431 "compare_and_write": false, 00:10:51.431 "flush": true, 00:10:51.431 "nvme_admin": false, 00:10:51.431 "nvme_io": false, 00:10:51.431 "read": true, 00:10:51.431 "reset": true, 00:10:51.431 "unmap": true, 00:10:51.431 "write": true, 00:10:51.431 "write_zeroes": true 00:10:51.431 }, 00:10:51.431 "uuid": "135495ab-df9f-5691-8d39-31b0a44e808a", 00:10:51.431 "zoned": false 00:10:51.431 } 00:10:51.431 ]' 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:10:51.431 00:10:51.431 real 0m0.315s 00:10:51.431 user 0m0.203s 00:10:51.431 sys 0m0.045s 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:51.431 09:51:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:10:51.431 ************************************ 00:10:51.431 END TEST rpc_daemon_integrity 00:10:51.431 
************************************ 00:10:51.431 09:51:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:10:51.431 09:51:28 rpc -- rpc/rpc.sh@84 -- # killprocess 59933 00:10:51.431 09:51:28 rpc -- common/autotest_common.sh@947 -- # '[' -z 59933 ']' 00:10:51.431 09:51:28 rpc -- common/autotest_common.sh@951 -- # kill -0 59933 00:10:51.431 09:51:28 rpc -- common/autotest_common.sh@952 -- # uname 00:10:51.431 09:51:28 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:51.431 09:51:28 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 59933 00:10:51.431 09:51:28 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:51.431 09:51:28 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:51.431 killing process with pid 59933 00:10:51.431 09:51:28 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 59933' 00:10:51.431 09:51:28 rpc -- common/autotest_common.sh@966 -- # kill 59933 00:10:51.431 09:51:28 rpc -- common/autotest_common.sh@971 -- # wait 59933 00:10:52.365 00:10:52.365 real 0m3.407s 00:10:52.365 user 0m4.153s 00:10:52.365 sys 0m1.052s 00:10:52.365 09:51:29 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:52.365 09:51:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.365 ************************************ 00:10:52.365 END TEST rpc 00:10:52.365 ************************************ 00:10:52.365 09:51:29 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:52.365 09:51:29 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:52.365 09:51:29 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:52.365 09:51:29 -- common/autotest_common.sh@10 -- # set +x 00:10:52.365 ************************************ 00:10:52.365 START TEST skip_rpc 00:10:52.365 ************************************ 00:10:52.365 09:51:29 skip_rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:10:52.365 * Looking for test storage... 00:10:52.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:10:52.365 09:51:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:52.365 09:51:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:10:52.365 09:51:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:10:52.365 09:51:29 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:52.365 09:51:29 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:52.365 09:51:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.365 ************************************ 00:10:52.365 START TEST skip_rpc 00:10:52.365 ************************************ 00:10:52.365 09:51:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:10:52.366 09:51:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60195 00:10:52.366 09:51:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:10:52.366 09:51:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:52.366 09:51:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:10:52.366 [2024-05-15 09:51:29.700570] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:10:52.366 [2024-05-15 09:51:29.701134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60195 ] 00:10:52.624 [2024-05-15 09:51:29.848854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.881 [2024-05-15 09:51:30.032867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.139 2024/05/15 09:51:34 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60195 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 60195 ']' 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 60195 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60195 00:10:58.139 killing process with pid 60195 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60195' 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 60195 00:10:58.139 09:51:34 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 60195 00:10:58.139 00:10:58.139 real 0m5.703s 00:10:58.139 user 0m5.181s 00:10:58.139 sys 0m0.417s 00:10:58.139 09:51:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- 
# xtrace_disable 00:10:58.139 ************************************ 00:10:58.139 END TEST skip_rpc 00:10:58.139 ************************************ 00:10:58.139 09:51:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.139 09:51:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:10:58.139 09:51:35 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:10:58.139 09:51:35 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:58.139 09:51:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.139 ************************************ 00:10:58.139 START TEST skip_rpc_with_json 00:10:58.139 ************************************ 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60293 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60293 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 60293 ']' 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:10:58.139 09:51:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:58.139 [2024-05-15 09:51:35.445299] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:10:58.139 [2024-05-15 09:51:35.446340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60293 ] 00:10:58.396 [2024-05-15 09:51:35.586480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.396 [2024-05-15 09:51:35.764582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:59.332 [2024-05-15 09:51:36.498430] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:10:59.332 2024/05/15 09:51:36 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:10:59.332 request: 00:10:59.332 { 00:10:59.332 "method": "nvmf_get_transports", 00:10:59.332 "params": { 00:10:59.332 "trtype": "tcp" 00:10:59.332 } 00:10:59.332 } 00:10:59.332 Got JSON-RPC error response 00:10:59.332 GoRPCClient: error on JSON-RPC call 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:59.332 [2024-05-15 09:51:36.510541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:10:59.332 09:51:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:59.332 { 00:10:59.332 "subsystems": [ 00:10:59.332 { 00:10:59.333 "subsystem": "keyring", 00:10:59.333 "config": [] 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "iobuf", 00:10:59.333 "config": [ 00:10:59.333 { 00:10:59.333 "method": "iobuf_set_options", 00:10:59.333 "params": { 00:10:59.333 "large_bufsize": 135168, 00:10:59.333 "large_pool_count": 1024, 00:10:59.333 "small_bufsize": 8192, 00:10:59.333 "small_pool_count": 8192 00:10:59.333 } 00:10:59.333 } 00:10:59.333 ] 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "sock", 00:10:59.333 "config": [ 00:10:59.333 { 00:10:59.333 "method": "sock_impl_set_options", 00:10:59.333 "params": { 00:10:59.333 "enable_ktls": false, 00:10:59.333 "enable_placement_id": 0, 00:10:59.333 "enable_quickack": false, 00:10:59.333 "enable_recv_pipe": 
true, 00:10:59.333 "enable_zerocopy_send_client": false, 00:10:59.333 "enable_zerocopy_send_server": true, 00:10:59.333 "impl_name": "posix", 00:10:59.333 "recv_buf_size": 2097152, 00:10:59.333 "send_buf_size": 2097152, 00:10:59.333 "tls_version": 0, 00:10:59.333 "zerocopy_threshold": 0 00:10:59.333 } 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "method": "sock_impl_set_options", 00:10:59.333 "params": { 00:10:59.333 "enable_ktls": false, 00:10:59.333 "enable_placement_id": 0, 00:10:59.333 "enable_quickack": false, 00:10:59.333 "enable_recv_pipe": true, 00:10:59.333 "enable_zerocopy_send_client": false, 00:10:59.333 "enable_zerocopy_send_server": true, 00:10:59.333 "impl_name": "ssl", 00:10:59.333 "recv_buf_size": 4096, 00:10:59.333 "send_buf_size": 4096, 00:10:59.333 "tls_version": 0, 00:10:59.333 "zerocopy_threshold": 0 00:10:59.333 } 00:10:59.333 } 00:10:59.333 ] 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "vmd", 00:10:59.333 "config": [] 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "accel", 00:10:59.333 "config": [ 00:10:59.333 { 00:10:59.333 "method": "accel_set_options", 00:10:59.333 "params": { 00:10:59.333 "buf_count": 2048, 00:10:59.333 "large_cache_size": 16, 00:10:59.333 "sequence_count": 2048, 00:10:59.333 "small_cache_size": 128, 00:10:59.333 "task_count": 2048 00:10:59.333 } 00:10:59.333 } 00:10:59.333 ] 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "bdev", 00:10:59.333 "config": [ 00:10:59.333 { 00:10:59.333 "method": "bdev_set_options", 00:10:59.333 "params": { 00:10:59.333 "bdev_auto_examine": true, 00:10:59.333 "bdev_io_cache_size": 256, 00:10:59.333 "bdev_io_pool_size": 65535, 00:10:59.333 "iobuf_large_cache_size": 16, 00:10:59.333 "iobuf_small_cache_size": 128 00:10:59.333 } 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "method": "bdev_raid_set_options", 00:10:59.333 "params": { 00:10:59.333 "process_window_size_kb": 1024 00:10:59.333 } 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "method": "bdev_iscsi_set_options", 00:10:59.333 "params": { 00:10:59.333 "timeout_sec": 30 00:10:59.333 } 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "method": "bdev_nvme_set_options", 00:10:59.333 "params": { 00:10:59.333 "action_on_timeout": "none", 00:10:59.333 "allow_accel_sequence": false, 00:10:59.333 "arbitration_burst": 0, 00:10:59.333 "bdev_retry_count": 3, 00:10:59.333 "ctrlr_loss_timeout_sec": 0, 00:10:59.333 "delay_cmd_submit": true, 00:10:59.333 "dhchap_dhgroups": [ 00:10:59.333 "null", 00:10:59.333 "ffdhe2048", 00:10:59.333 "ffdhe3072", 00:10:59.333 "ffdhe4096", 00:10:59.333 "ffdhe6144", 00:10:59.333 "ffdhe8192" 00:10:59.333 ], 00:10:59.333 "dhchap_digests": [ 00:10:59.333 "sha256", 00:10:59.333 "sha384", 00:10:59.333 "sha512" 00:10:59.333 ], 00:10:59.333 "disable_auto_failback": false, 00:10:59.333 "fast_io_fail_timeout_sec": 0, 00:10:59.333 "generate_uuids": false, 00:10:59.333 "high_priority_weight": 0, 00:10:59.333 "io_path_stat": false, 00:10:59.333 "io_queue_requests": 0, 00:10:59.333 "keep_alive_timeout_ms": 10000, 00:10:59.333 "low_priority_weight": 0, 00:10:59.333 "medium_priority_weight": 0, 00:10:59.333 "nvme_adminq_poll_period_us": 10000, 00:10:59.333 "nvme_error_stat": false, 00:10:59.333 "nvme_ioq_poll_period_us": 0, 00:10:59.333 "rdma_cm_event_timeout_ms": 0, 00:10:59.333 "rdma_max_cq_size": 0, 00:10:59.333 "rdma_srq_size": 0, 00:10:59.333 "reconnect_delay_sec": 0, 00:10:59.333 "timeout_admin_us": 0, 00:10:59.333 "timeout_us": 0, 00:10:59.333 "transport_ack_timeout": 0, 00:10:59.333 "transport_retry_count": 4, 00:10:59.333 
"transport_tos": 0 00:10:59.333 } 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "method": "bdev_nvme_set_hotplug", 00:10:59.333 "params": { 00:10:59.333 "enable": false, 00:10:59.333 "period_us": 100000 00:10:59.333 } 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "method": "bdev_wait_for_examine" 00:10:59.333 } 00:10:59.333 ] 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "scsi", 00:10:59.333 "config": null 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "scheduler", 00:10:59.333 "config": [ 00:10:59.333 { 00:10:59.333 "method": "framework_set_scheduler", 00:10:59.333 "params": { 00:10:59.333 "name": "static" 00:10:59.333 } 00:10:59.333 } 00:10:59.333 ] 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "vhost_scsi", 00:10:59.333 "config": [] 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "vhost_blk", 00:10:59.333 "config": [] 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "ublk", 00:10:59.333 "config": [] 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "nbd", 00:10:59.333 "config": [] 00:10:59.333 }, 00:10:59.333 { 00:10:59.333 "subsystem": "nvmf", 00:10:59.333 "config": [ 00:10:59.333 { 00:10:59.333 "method": "nvmf_set_config", 00:10:59.333 "params": { 00:10:59.333 "admin_cmd_passthru": { 00:10:59.333 "identify_ctrlr": false 00:10:59.333 }, 00:10:59.333 "discovery_filter": "match_any" 00:10:59.333 } 00:10:59.333 }, 00:10:59.334 { 00:10:59.334 "method": "nvmf_set_max_subsystems", 00:10:59.334 "params": { 00:10:59.334 "max_subsystems": 1024 00:10:59.334 } 00:10:59.334 }, 00:10:59.334 { 00:10:59.334 "method": "nvmf_set_crdt", 00:10:59.334 "params": { 00:10:59.334 "crdt1": 0, 00:10:59.334 "crdt2": 0, 00:10:59.334 "crdt3": 0 00:10:59.334 } 00:10:59.334 }, 00:10:59.334 { 00:10:59.334 "method": "nvmf_create_transport", 00:10:59.334 "params": { 00:10:59.334 "abort_timeout_sec": 1, 00:10:59.334 "ack_timeout": 0, 00:10:59.334 "buf_cache_size": 4294967295, 00:10:59.334 "c2h_success": true, 00:10:59.334 "data_wr_pool_size": 0, 00:10:59.334 "dif_insert_or_strip": false, 00:10:59.334 "in_capsule_data_size": 4096, 00:10:59.334 "io_unit_size": 131072, 00:10:59.334 "max_aq_depth": 128, 00:10:59.334 "max_io_qpairs_per_ctrlr": 127, 00:10:59.334 "max_io_size": 131072, 00:10:59.334 "max_queue_depth": 128, 00:10:59.334 "num_shared_buffers": 511, 00:10:59.334 "sock_priority": 0, 00:10:59.334 "trtype": "TCP", 00:10:59.334 "zcopy": false 00:10:59.334 } 00:10:59.334 } 00:10:59.334 ] 00:10:59.334 }, 00:10:59.334 { 00:10:59.334 "subsystem": "iscsi", 00:10:59.334 "config": [ 00:10:59.334 { 00:10:59.334 "method": "iscsi_set_options", 00:10:59.334 "params": { 00:10:59.334 "allow_duplicated_isid": false, 00:10:59.334 "chap_group": 0, 00:10:59.334 "data_out_pool_size": 2048, 00:10:59.334 "default_time2retain": 20, 00:10:59.334 "default_time2wait": 2, 00:10:59.334 "disable_chap": false, 00:10:59.334 "error_recovery_level": 0, 00:10:59.334 "first_burst_length": 8192, 00:10:59.334 "immediate_data": true, 00:10:59.334 "immediate_data_pool_size": 16384, 00:10:59.334 "max_connections_per_session": 2, 00:10:59.334 "max_large_datain_per_connection": 64, 00:10:59.334 "max_queue_depth": 64, 00:10:59.334 "max_r2t_per_connection": 4, 00:10:59.334 "max_sessions": 128, 00:10:59.334 "mutual_chap": false, 00:10:59.334 "node_base": "iqn.2016-06.io.spdk", 00:10:59.334 "nop_in_interval": 30, 00:10:59.334 "nop_timeout": 60, 00:10:59.334 "pdu_pool_size": 36864, 00:10:59.334 "require_chap": false 00:10:59.334 } 00:10:59.334 } 00:10:59.334 ] 00:10:59.334 } 00:10:59.334 ] 00:10:59.334 } 
00:10:59.334 09:51:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:59.334 09:51:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60293 00:10:59.334 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 60293 ']' 00:10:59.334 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 60293 00:10:59.334 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:10:59.334 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:10:59.334 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60293 00:10:59.593 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:10:59.593 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:10:59.593 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60293' 00:10:59.593 killing process with pid 60293 00:10:59.593 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 60293 00:10:59.593 09:51:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 60293 00:11:00.158 09:51:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60338 00:11:00.158 09:51:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:00.158 09:51:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:05.446 09:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60338 00:11:05.446 09:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 60338 ']' 00:11:05.446 09:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 60338 00:11:05.446 09:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:11:05.446 09:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:05.446 09:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60338 00:11:05.446 killing process with pid 60338 00:11:05.446 09:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:05.446 09:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:05.446 09:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60338' 00:11:05.446 09:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 60338 00:11:05.446 09:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 60338 00:11:05.703 09:51:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:05.703 09:51:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:05.703 ************************************ 00:11:05.703 END TEST skip_rpc_with_json 00:11:05.703 ************************************ 00:11:05.703 00:11:05.703 real 0m7.668s 00:11:05.703 user 0m7.152s 00:11:05.703 sys 0m0.977s 00:11:05.703 09:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:05.703 
09:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:05.961 09:51:43 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:05.961 09:51:43 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:05.961 09:51:43 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:05.961 09:51:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.961 ************************************ 00:11:05.961 START TEST skip_rpc_with_delay 00:11:05.961 ************************************ 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:05.961 [2024-05-15 09:51:43.181875] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
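The NOT/valid_exec_arg wrapper traced above exists to assert failure: it resolves the argument to an executable, runs it, and inverts the exit status, so the test only passes when spdk_tgt rejects --wait-for-rpc in combination with --no-rpc-server. A stripped-down sketch of the inversion (helper body simplified; the real wrapper also records es for the caller):

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc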
00:11:05.961 [2024-05-15 09:51:43.182408] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:05.961 00:11:05.961 real 0m0.106s 00:11:05.961 user 0m0.055s 00:11:05.961 sys 0m0.047s 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:05.961 09:51:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:05.961 ************************************ 00:11:05.961 END TEST skip_rpc_with_delay 00:11:05.961 ************************************ 00:11:05.961 09:51:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:05.961 09:51:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:05.961 09:51:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:05.961 09:51:43 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:05.961 09:51:43 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:05.961 09:51:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.961 ************************************ 00:11:05.961 START TEST exit_on_failed_rpc_init 00:11:05.961 ************************************ 00:11:05.961 09:51:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:11:05.961 09:51:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60453 00:11:05.961 09:51:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:05.961 09:51:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 60453 00:11:05.961 09:51:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 60453 ']' 00:11:05.961 09:51:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.961 09:51:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:05.961 09:51:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.961 09:51:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:05.961 09:51:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:06.218 [2024-05-15 09:51:43.344841] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:11:06.218 [2024-05-15 09:51:43.345254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60453 ] 00:11:06.218 [2024-05-15 09:51:43.491840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.476 [2024-05-15 09:51:43.666741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:07.410 09:51:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:07.410 [2024-05-15 09:51:44.582860] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:11:07.410 [2024-05-15 09:51:44.583343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60483 ] 00:11:07.410 [2024-05-15 09:51:44.747686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.667 [2024-05-15 09:51:44.917722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.667 [2024-05-15 09:51:44.918379] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
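The two launches above are the whole point of exit_on_failed_rpc_init: a first spdk_tgt on core mask 0x1 owns the default /var/tmp/spdk.sock, so a second instance on 0x2 must fail its rpc_listen step and exit non-zero. Condensed, the scenario looks like this (NOT as in the earlier test; the real test also waits for the first target's socket before launching the second):

    ./build/bin/spdk_tgt -m 0x1 &        # first target claims /var/tmp/spdk.sock
    first_pid=$!
    # second target, different cores, same default RPC socket: expected to abort
    NOT ./build/bin/spdk_tgt -m 0x2
    kill "$first_pid" && wait "$first_pid"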
00:11:07.667 [2024-05-15 09:51:44.918656] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:07.667 [2024-05-15 09:51:44.918996] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 60453 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 60453 ']' 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 60453 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60453 00:11:07.925 killing process with pid 60453 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60453' 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 60453 00:11:07.925 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 60453 00:11:08.491 ************************************ 00:11:08.491 END TEST exit_on_failed_rpc_init 00:11:08.491 ************************************ 00:11:08.491 00:11:08.491 real 0m2.513s 00:11:08.491 user 0m2.977s 00:11:08.491 sys 0m0.644s 00:11:08.491 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:08.491 09:51:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:08.491 09:51:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:08.491 ************************************ 00:11:08.491 END TEST skip_rpc 00:11:08.491 ************************************ 00:11:08.491 00:11:08.491 real 0m16.320s 00:11:08.491 user 0m15.473s 00:11:08.491 sys 0m2.306s 00:11:08.491 09:51:45 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:08.491 09:51:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.749 09:51:45 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:08.749 09:51:45 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:08.749 09:51:45 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:08.749 09:51:45 -- common/autotest_common.sh@10 -- # set +x 00:11:08.749 
************************************ 00:11:08.749 START TEST rpc_client 00:11:08.749 ************************************ 00:11:08.749 09:51:45 rpc_client -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:08.749 * Looking for test storage... 00:11:08.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:08.750 09:51:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:08.750 OK 00:11:08.750 09:51:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:08.750 00:11:08.750 real 0m0.120s 00:11:08.750 user 0m0.053s 00:11:08.750 sys 0m0.073s 00:11:08.750 09:51:46 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:08.750 09:51:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:08.750 ************************************ 00:11:08.750 END TEST rpc_client 00:11:08.750 ************************************ 00:11:08.750 09:51:46 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:08.750 09:51:46 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:08.750 09:51:46 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:08.750 09:51:46 -- common/autotest_common.sh@10 -- # set +x 00:11:08.750 ************************************ 00:11:08.750 START TEST json_config 00:11:08.750 ************************************ 00:11:08.750 09:51:46 json_config -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:08.750 09:51:46 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:08.750 09:51:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:08.750 09:51:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.750 09:51:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.750 09:51:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.750 09:51:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.750 09:51:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.750 09:51:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.750 09:51:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.750 09:51:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.750 09:51:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.750 09:51:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.009 09:51:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.010 09:51:46 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.010 09:51:46 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.010 09:51:46 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.010 09:51:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.010 09:51:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.010 09:51:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.010 09:51:46 json_config -- paths/export.sh@5 -- # export PATH 00:11:09.010 09:51:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@47 -- # : 0 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.010 09:51:46 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:11:09.010 09:51:46 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:11:09.010 INFO: JSON configuration test init 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:11:09.010 09:51:46 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:09.010 09:51:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:11:09.010 09:51:46 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:09.010 09:51:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:09.010 09:51:46 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:11:09.010 09:51:46 json_config -- json_config/common.sh@9 -- # local app=target 00:11:09.010 09:51:46 json_config -- json_config/common.sh@10 -- # shift 00:11:09.010 09:51:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:09.010 09:51:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:09.010 09:51:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:11:09.010 09:51:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:09.010 09:51:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:09.010 09:51:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60607 00:11:09.010 09:51:46 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:11:09.010 09:51:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:09.010 Waiting for target to run... 
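The declare -A lines above carry the whole state of the json_config helpers: one associative array each for PID, RPC socket, launch parameters and config path, keyed by app role (target or initiator). A minimal sketch of how such a table drives the launch (array contents copied from the trace; the start helper is illustrative and the real one passes more through):

    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
    declare -A app_pid

    start_app() {
        local app=$1; shift
        # params intentionally unquoted so the flag string splits into arguments
        ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
        app_pid[$app]=$!
    }

    start_app target --wait-for-rpc     # as in json_config_test_start_app above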
00:11:09.010 09:51:46 json_config -- json_config/common.sh@25 -- # waitforlisten 60607 /var/tmp/spdk_tgt.sock 00:11:09.010 09:51:46 json_config -- common/autotest_common.sh@828 -- # '[' -z 60607 ']' 00:11:09.010 09:51:46 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:09.010 09:51:46 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:09.010 09:51:46 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:09.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:09.010 09:51:46 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:09.010 09:51:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:09.010 [2024-05-15 09:51:46.246958] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:11:09.010 [2024-05-15 09:51:46.247412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60607 ] 00:11:09.599 [2024-05-15 09:51:46.828620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.599 [2024-05-15 09:51:46.945379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.165 00:11:10.165 09:51:47 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:10.165 09:51:47 json_config -- common/autotest_common.sh@861 -- # return 0 00:11:10.165 09:51:47 json_config -- json_config/common.sh@26 -- # echo '' 00:11:10.165 09:51:47 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:11:10.165 09:51:47 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:11:10.165 09:51:47 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:10.165 09:51:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:10.165 09:51:47 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:11:10.165 09:51:47 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:11:10.165 09:51:47 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:10.165 09:51:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:10.165 09:51:47 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:11:10.165 09:51:47 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:11:10.165 09:51:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:11:10.730 09:51:47 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:11:10.730 09:51:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:11:10.730 09:51:47 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:10.730 09:51:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:10.730 09:51:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:11:10.730 09:51:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:11:10.730 09:51:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:11:10.731 09:51:47 json_config -- 
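Because the target above was started with --wait-for-rpc, it comes up idle and gets its block-device configuration pushed over the freshly created RPC socket: gen_nvme.sh emits a JSON config for the local NVMe devices and load_config replays it against the target. The hand-off reduces to something like this (a sketch; the trace does not show how the two commands are connected, a pipe is assumed here):

    scripts/gen_nvme.sh --json-with-subsystems \
        | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config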
json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:11:10.731 09:51:47 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:11:10.731 09:51:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@48 -- # local get_types 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:11:10.988 09:51:48 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:10.988 09:51:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@55 -- # return 0 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:11:10.988 09:51:48 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:10.988 09:51:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:11:10.988 09:51:48 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:10.988 09:51:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:11.251 MallocForNvmf0 00:11:11.251 09:51:48 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:11.251 09:51:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:11.510 MallocForNvmf1 00:11:11.510 09:51:48 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:11:11.510 09:51:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:11:12.114 [2024-05-15 09:51:49.328967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.114 09:51:49 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:12.114 09:51:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:12.678 09:51:49 json_config -- 
json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:12.678 09:51:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:12.936 09:51:50 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:12.936 09:51:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:13.501 09:51:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:13.501 09:51:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:13.759 [2024-05-15 09:51:50.893515] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:13.759 [2024-05-15 09:51:50.894162] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:13.759 09:51:50 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:11:13.759 09:51:50 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:13.759 09:51:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:13.759 09:51:50 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:11:13.759 09:51:50 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:13.759 09:51:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:13.759 09:51:50 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:11:13.759 09:51:51 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:13.759 09:51:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:14.017 MallocBdevForConfigChangeCheck 00:11:14.017 09:51:51 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:11:14.017 09:51:51 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:14.017 09:51:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:14.017 09:51:51 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:11:14.017 09:51:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:14.582 INFO: shutting down applications... 00:11:14.582 09:51:51 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
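Taken together, the tgt_rpc calls above are the standard NVMe-oF/TCP bring-up done one RPC at a time against the target's socket: create the malloc bdevs, create the TCP transport, create a subsystem, attach the bdevs as namespaces and open a listener. Replayed as a plain rpc.py session (arguments copied from the trace):

    RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0       # backing bdevs
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0            # TCP transport
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420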
00:11:14.582 09:51:51 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:11:14.582 09:51:51 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:11:14.582 09:51:51 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:11:14.582 09:51:51 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:11:14.840 Calling clear_iscsi_subsystem 00:11:14.840 Calling clear_nvmf_subsystem 00:11:14.840 Calling clear_nbd_subsystem 00:11:14.840 Calling clear_ublk_subsystem 00:11:14.840 Calling clear_vhost_blk_subsystem 00:11:14.840 Calling clear_vhost_scsi_subsystem 00:11:14.840 Calling clear_bdev_subsystem 00:11:14.840 09:51:52 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:11:14.840 09:51:52 json_config -- json_config/json_config.sh@343 -- # count=100 00:11:14.840 09:51:52 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:11:14.840 09:51:52 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:14.840 09:51:52 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:11:14.840 09:51:52 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:11:15.461 09:51:52 json_config -- json_config/json_config.sh@345 -- # break 00:11:15.461 09:51:52 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:11:15.461 09:51:52 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:11:15.461 09:51:52 json_config -- json_config/common.sh@31 -- # local app=target 00:11:15.461 09:51:52 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:15.461 09:51:52 json_config -- json_config/common.sh@35 -- # [[ -n 60607 ]] 00:11:15.461 09:51:52 json_config -- json_config/common.sh@38 -- # kill -SIGINT 60607 00:11:15.461 [2024-05-15 09:51:52.678755] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:15.461 09:51:52 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:15.461 09:51:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:15.461 09:51:52 json_config -- json_config/common.sh@41 -- # kill -0 60607 00:11:15.461 09:51:52 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:11:16.025 09:51:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:11:16.025 09:51:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:16.025 09:51:53 json_config -- json_config/common.sh@41 -- # kill -0 60607 00:11:16.025 09:51:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:16.025 SPDK target shutdown done 00:11:16.025 09:51:53 json_config -- json_config/common.sh@43 -- # break 00:11:16.025 09:51:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:16.025 09:51:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:16.025 09:51:53 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:11:16.025 INFO: relaunching applications... 
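The shutdown traced above is a polled SIGINT: json_config_test_shutdown_app sends SIGINT once, then probes the PID with kill -0 for up to 30 half-second intervals, clearing app_pid and printing 'SPDK target shutdown done' once the probe fails. The loop is essentially (variable names as in the arrays sketched earlier):

    kill -SIGINT "${app_pid[target]}"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "${app_pid[target]}" 2>/dev/null || break   # gone: shutdown finished
        sleep 0.5
    done
    echo 'SPDK target shutdown done'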
00:11:16.025 09:51:53 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:16.025 09:51:53 json_config -- json_config/common.sh@9 -- # local app=target 00:11:16.025 09:51:53 json_config -- json_config/common.sh@10 -- # shift 00:11:16.025 09:51:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:16.025 09:51:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:16.025 09:51:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:11:16.025 09:51:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:16.025 09:51:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:16.025 09:51:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=60892 00:11:16.025 09:51:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:16.025 Waiting for target to run... 00:11:16.025 09:51:53 json_config -- json_config/common.sh@25 -- # waitforlisten 60892 /var/tmp/spdk_tgt.sock 00:11:16.025 09:51:53 json_config -- common/autotest_common.sh@828 -- # '[' -z 60892 ']' 00:11:16.025 09:51:53 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:16.025 09:51:53 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:16.025 09:51:53 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:16.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:16.025 09:51:53 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:16.025 09:51:53 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:16.025 09:51:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:16.025 [2024-05-15 09:51:53.270555] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:11:16.025 [2024-05-15 09:51:53.271182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60892 ] 00:11:16.592 [2024-05-15 09:51:53.839043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.592 [2024-05-15 09:51:53.956929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.158 [2024-05-15 09:51:54.278437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.158 [2024-05-15 09:51:54.310334] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:17.158 [2024-05-15 09:51:54.310970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:17.158 00:11:17.158 INFO: Checking if target configuration is the same... 
00:11:17.158 09:51:54 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:17.158 09:51:54 json_config -- common/autotest_common.sh@861 -- # return 0 00:11:17.158 09:51:54 json_config -- json_config/common.sh@26 -- # echo '' 00:11:17.158 09:51:54 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:11:17.158 09:51:54 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:11:17.158 09:51:54 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:17.158 09:51:54 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:11:17.158 09:51:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:17.158 + '[' 2 -ne 2 ']' 00:11:17.158 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:17.158 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:11:17.158 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:17.158 +++ basename /dev/fd/62 00:11:17.158 ++ mktemp /tmp/62.XXX 00:11:17.158 + tmp_file_1=/tmp/62.h9f 00:11:17.158 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:17.158 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:17.158 + tmp_file_2=/tmp/spdk_tgt_config.json.PS3 00:11:17.158 + ret=0 00:11:17.158 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:17.725 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:17.725 + diff -u /tmp/62.h9f /tmp/spdk_tgt_config.json.PS3 00:11:17.725 INFO: JSON config files are the same 00:11:17.725 + echo 'INFO: JSON config files are the same' 00:11:17.725 + rm /tmp/62.h9f /tmp/spdk_tgt_config.json.PS3 00:11:17.725 + exit 0 00:11:17.725 INFO: changing configuration and checking if this can be detected... 00:11:17.725 09:51:54 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:11:17.725 09:51:54 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:11:17.725 09:51:54 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:17.725 09:51:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:11:17.983 09:51:55 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:17.983 09:51:55 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:11:17.983 09:51:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:17.983 + '[' 2 -ne 2 ']' 00:11:17.983 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:11:17.983 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:11:17.983 + rootdir=/home/vagrant/spdk_repo/spdk 00:11:17.983 +++ basename /dev/fd/62 00:11:17.983 ++ mktemp /tmp/62.XXX 00:11:17.983 + tmp_file_1=/tmp/62.ioL 00:11:17.984 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:17.984 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:11:17.984 + tmp_file_2=/tmp/spdk_tgt_config.json.BVt 00:11:17.984 + ret=0 00:11:17.984 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:18.551 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:11:18.551 + diff -u /tmp/62.ioL /tmp/spdk_tgt_config.json.BVt 00:11:18.551 + ret=1 00:11:18.551 + echo '=== Start of file: /tmp/62.ioL ===' 00:11:18.551 + cat /tmp/62.ioL 00:11:18.551 + echo '=== End of file: /tmp/62.ioL ===' 00:11:18.551 + echo '' 00:11:18.551 + echo '=== Start of file: /tmp/spdk_tgt_config.json.BVt ===' 00:11:18.551 + cat /tmp/spdk_tgt_config.json.BVt 00:11:18.551 + echo '=== End of file: /tmp/spdk_tgt_config.json.BVt ===' 00:11:18.551 + echo '' 00:11:18.551 + rm /tmp/62.ioL /tmp/spdk_tgt_config.json.BVt 00:11:18.551 + exit 1 00:11:18.551 INFO: configuration change detected. 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@317 -- # [[ -n 60892 ]] 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@193 -- # uname -s 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:18.551 09:51:55 json_config -- json_config/json_config.sh@323 -- # killprocess 60892 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@947 -- # '[' -z 60892 ']' 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@951 -- # kill -0 60892 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@952 -- # uname 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 60892 00:11:18.551 
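Both comparisons above use the same recipe: dump the live configuration with save_config, normalize it and the on-disk spdk_tgt_config.json with config_filter.py -method sort, and diff the two, so the first pass (unchanged target) must report 'JSON config files are the same' and the second pass must flag a change once MallocBdevForConfigChangeCheck has been deleted. A compressed sketch of the check (temp-file handling simplified; config_filter.py is assumed to read the config on stdin, which matches how json_diff.sh invokes it above):

    live=$(mktemp) ondisk=$(mktemp)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$live"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$ondisk"
    if diff -u "$ondisk" "$live"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm "$live" "$ondisk"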
09:51:55 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 60892' 00:11:18.551 killing process with pid 60892 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@966 -- # kill 60892 00:11:18.551 [2024-05-15 09:51:55.875407] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:18.551 09:51:55 json_config -- common/autotest_common.sh@971 -- # wait 60892 00:11:19.128 09:51:56 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:11:19.128 09:51:56 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:11:19.128 09:51:56 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:19.128 09:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:19.128 09:51:56 json_config -- json_config/json_config.sh@328 -- # return 0 00:11:19.128 09:51:56 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:11:19.128 INFO: Success 00:11:19.128 ************************************ 00:11:19.128 END TEST json_config 00:11:19.128 ************************************ 00:11:19.128 00:11:19.128 real 0m10.220s 00:11:19.128 user 0m14.930s 00:11:19.128 sys 0m2.495s 00:11:19.128 09:51:56 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:19.128 09:51:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:19.128 09:51:56 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:19.128 09:51:56 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:19.128 09:51:56 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:19.128 09:51:56 -- common/autotest_common.sh@10 -- # set +x 00:11:19.128 ************************************ 00:11:19.128 START TEST json_config_extra_key 00:11:19.128 ************************************ 00:11:19.128 09:51:56 json_config_extra_key -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:11:19.128 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.128 09:51:56 
json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:11:19.128 09:51:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:19.129 09:51:56 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.129 09:51:56 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.129 09:51:56 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.129 09:51:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.129 09:51:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.129 09:51:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.129 09:51:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:11:19.129 09:51:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.129 09:51:56 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.129 09:51:56 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:11:19.129 INFO: launching applications... 00:11:19.129 09:51:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:19.129 09:51:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:11:19.129 09:51:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:11:19.129 09:51:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:19.129 09:51:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:19.129 09:51:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:11:19.129 09:51:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:19.129 09:51:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:19.129 09:51:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61075 00:11:19.129 09:51:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:19.129 Waiting for target to run... 
00:11:19.129 09:51:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61075 /var/tmp/spdk_tgt.sock 00:11:19.129 09:51:56 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:11:19.129 09:51:56 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 61075 ']' 00:11:19.129 09:51:56 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:19.129 09:51:56 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:19.129 09:51:56 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:19.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:19.129 09:51:56 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:19.129 09:51:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:19.388 [2024-05-15 09:51:56.521510] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:11:19.388 [2024-05-15 09:51:56.521942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61075 ] 00:11:19.953 [2024-05-15 09:51:57.136214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.953 [2024-05-15 09:51:57.233441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.212 00:11:20.212 INFO: shutting down applications... 00:11:20.212 09:51:57 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:20.212 09:51:57 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:11:20.212 09:51:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:11:20.212 09:51:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
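The extra_key variant skips the RPC build-up entirely: spdk_tgt is booted straight from a prebaked JSON file, the test only waits for it to answer on its socket, and then tears it down again. The positive path is little more than (command line as in the trace; the polling and shutdown reuse the helpers sketched earlier):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    app_pid[target]=$!
    # wait for the socket, then shut the target down as in the previous test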
00:11:20.212 09:51:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:11:20.212 09:51:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:11:20.212 09:51:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:11:20.212 09:51:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61075 ]] 00:11:20.212 09:51:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61075 00:11:20.212 09:51:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:11:20.212 09:51:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:20.213 09:51:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61075 00:11:20.213 09:51:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:11:20.780 09:51:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:11:20.780 09:51:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:11:20.780 09:51:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61075 00:11:20.780 09:51:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:11:20.780 09:51:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:11:20.780 09:51:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:11:20.780 09:51:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:11:20.780 SPDK target shutdown done 00:11:20.780 09:51:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:11:20.780 Success 00:11:20.780 ************************************ 00:11:20.780 END TEST json_config_extra_key 00:11:20.780 ************************************ 00:11:20.780 00:11:20.780 real 0m1.726s 00:11:20.780 user 0m1.402s 00:11:20.780 sys 0m0.694s 00:11:20.780 09:51:58 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:20.780 09:51:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:11:20.780 09:51:58 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:20.780 09:51:58 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:20.780 09:51:58 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:20.780 09:51:58 -- common/autotest_common.sh@10 -- # set +x 00:11:20.780 ************************************ 00:11:20.780 START TEST alias_rpc 00:11:20.780 ************************************ 00:11:20.780 09:51:58 alias_rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:11:21.038 * Looking for test storage... 
00:11:21.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:11:21.038 09:51:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:21.038 09:51:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61151 00:11:21.038 09:51:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:21.038 09:51:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61151 00:11:21.038 09:51:58 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 61151 ']' 00:11:21.038 09:51:58 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.038 09:51:58 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:21.038 09:51:58 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.038 09:51:58 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:21.038 09:51:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.038 [2024-05-15 09:51:58.268783] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:11:21.038 [2024-05-15 09:51:58.269309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61151 ] 00:11:21.038 [2024-05-15 09:51:58.416657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.295 [2024-05-15 09:51:58.591688] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:11:22.248 09:51:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:11:22.248 09:51:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61151 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 61151 ']' 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 61151 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 61151 00:11:22.248 killing process with pid 61151 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 61151' 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@966 -- # kill 61151 00:11:22.248 09:51:59 alias_rpc -- common/autotest_common.sh@971 -- # wait 61151 00:11:23.182 ************************************ 00:11:23.182 END TEST alias_rpc 00:11:23.182 ************************************ 00:11:23.182 00:11:23.182 real 0m2.152s 00:11:23.182 user 0m2.269s 00:11:23.182 sys 0m0.622s 00:11:23.182 09:52:00 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:23.182 09:52:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.182 09:52:00 -- 
spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:11:23.182 09:52:00 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:23.182 09:52:00 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:23.182 09:52:00 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:23.182 09:52:00 -- common/autotest_common.sh@10 -- # set +x 00:11:23.182 ************************************ 00:11:23.182 START TEST dpdk_mem_utility 00:11:23.182 ************************************ 00:11:23.182 09:52:00 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:11:23.182 * Looking for test storage... 00:11:23.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:11:23.182 09:52:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:23.182 09:52:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61249 00:11:23.182 09:52:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:23.182 09:52:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61249 00:11:23.182 09:52:00 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 61249 ']' 00:11:23.182 09:52:00 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.182 09:52:00 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:23.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.182 09:52:00 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.182 09:52:00 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:23.182 09:52:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:23.182 [2024-05-15 09:52:00.509350] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
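Once spdk_tgt is listening, the dpdk_mem_utility test below asks it for a DPDK memory dump through the env_dpdk_get_mem_stats RPC (the target reports writing /tmp/spdk_mem_dump.txt) and then renders that dump with scripts/dpdk_mem_info.py, first as a heap/mempool/memzone summary and then per element with -m 0; the long listing that follows is that script's output. Reproducing the same two steps by hand might look like this (assuming rpc_cmd in the trace maps to scripts/rpc.py against the default socket):

    # Sketch: request a memory dump from a running target and summarize it.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" env_dpdk_get_mem_stats                                # target writes /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heap/mempool/memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # per-element detail for heap 0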
00:11:23.182 [2024-05-15 09:52:00.509777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61249 ] 00:11:23.440 [2024-05-15 09:52:00.650448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.698 [2024-05-15 09:52:00.828290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.265 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:24.265 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:11:24.265 09:52:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:11:24.265 09:52:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:11:24.265 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:24.265 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:24.265 { 00:11:24.265 "filename": "/tmp/spdk_mem_dump.txt" 00:11:24.265 } 00:11:24.265 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:24.265 09:52:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:11:24.265 DPDK memory size 814.000000 MiB in 1 heap(s) 00:11:24.265 1 heaps totaling size 814.000000 MiB 00:11:24.265 size: 814.000000 MiB heap id: 0 00:11:24.265 end heaps---------- 00:11:24.265 8 mempools totaling size 598.116089 MiB 00:11:24.265 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:11:24.265 size: 158.602051 MiB name: PDU_data_out_Pool 00:11:24.265 size: 84.521057 MiB name: bdev_io_61249 00:11:24.265 size: 51.011292 MiB name: evtpool_61249 00:11:24.265 size: 50.003479 MiB name: msgpool_61249 00:11:24.265 size: 21.763794 MiB name: PDU_Pool 00:11:24.265 size: 19.513306 MiB name: SCSI_TASK_Pool 00:11:24.265 size: 0.026123 MiB name: Session_Pool 00:11:24.265 end mempools------- 00:11:24.265 6 memzones totaling size 4.142822 MiB 00:11:24.265 size: 1.000366 MiB name: RG_ring_0_61249 00:11:24.265 size: 1.000366 MiB name: RG_ring_1_61249 00:11:24.265 size: 1.000366 MiB name: RG_ring_4_61249 00:11:24.265 size: 1.000366 MiB name: RG_ring_5_61249 00:11:24.265 size: 0.125366 MiB name: RG_ring_2_61249 00:11:24.265 size: 0.015991 MiB name: RG_ring_3_61249 00:11:24.265 end memzones------- 00:11:24.265 09:52:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:11:24.525 heap id: 0 total size: 814.000000 MiB number of busy elements: 302 number of free elements: 15 00:11:24.525 list of free elements. 
size: 12.471558 MiB 00:11:24.525 element at address: 0x200000400000 with size: 1.999512 MiB 00:11:24.525 element at address: 0x200018e00000 with size: 0.999878 MiB 00:11:24.525 element at address: 0x200019000000 with size: 0.999878 MiB 00:11:24.525 element at address: 0x200003e00000 with size: 0.996277 MiB 00:11:24.525 element at address: 0x200031c00000 with size: 0.994446 MiB 00:11:24.525 element at address: 0x200013800000 with size: 0.978699 MiB 00:11:24.525 element at address: 0x200007000000 with size: 0.959839 MiB 00:11:24.525 element at address: 0x200019200000 with size: 0.936584 MiB 00:11:24.525 element at address: 0x200000200000 with size: 0.833008 MiB 00:11:24.525 element at address: 0x20001aa00000 with size: 0.565308 MiB 00:11:24.525 element at address: 0x20000b200000 with size: 0.488892 MiB 00:11:24.525 element at address: 0x200000800000 with size: 0.486145 MiB 00:11:24.525 element at address: 0x200019400000 with size: 0.485657 MiB 00:11:24.525 element at address: 0x200027e00000 with size: 0.399597 MiB 00:11:24.525 element at address: 0x200003a00000 with size: 0.347839 MiB 00:11:24.525 list of standard malloc elements. size: 199.265869 MiB 00:11:24.525 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:11:24.525 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:11:24.525 element at address: 0x200018efff80 with size: 1.000122 MiB 00:11:24.525 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:11:24.525 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:11:24.525 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:11:24.525 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:11:24.525 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:11:24.525 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:11:24.525 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d6540 with size: 0.000183 MiB 
00:11:24.525 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:11:24.525 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000087c740 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000087c800 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000087c980 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59180 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59240 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59300 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59480 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59540 with size: 0.000183 MiB 00:11:24.526 element at 
address: 0x200003a59600 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59780 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59840 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59900 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003adb300 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003adb500 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003affa80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003affb40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27d700 
with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa90b80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa90c40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa90d00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa90dc0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa90e80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa90f40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91000 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa910c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91180 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91240 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91300 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa913c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92800 with size: 0.000183 MiB 
00:11:24.526 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:11:24.526 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:11:24.527 element at 
address: 0x20001aa94d80 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:11:24.527 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e664c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e66580 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6d180 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6ee80 
with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:11:24.527 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:11:24.527 list of memzone associated elements. size: 602.262573 MiB 00:11:24.527 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:11:24.527 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:11:24.527 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:11:24.527 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:11:24.527 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:11:24.527 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61249_0 00:11:24.527 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:11:24.527 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61249_0 00:11:24.527 element at address: 0x200003fff380 with size: 48.003052 MiB 00:11:24.527 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61249_0 00:11:24.527 element at address: 0x2000195be940 with size: 20.255554 MiB 00:11:24.527 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:11:24.527 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:11:24.527 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:11:24.527 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:11:24.527 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61249 00:11:24.527 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:11:24.527 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61249 00:11:24.527 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:11:24.527 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61249 00:11:24.527 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:11:24.527 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:11:24.527 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:11:24.527 associated memzone info: size: 1.007996 MiB name: 
MP_PDU_immediate_data_Pool 00:11:24.527 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:11:24.527 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:11:24.527 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:11:24.527 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:11:24.527 element at address: 0x200003eff180 with size: 1.000488 MiB 00:11:24.528 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61249 00:11:24.528 element at address: 0x200003affc00 with size: 1.000488 MiB 00:11:24.528 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61249 00:11:24.528 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:11:24.528 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61249 00:11:24.528 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:11:24.528 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61249 00:11:24.528 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:11:24.528 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61249 00:11:24.528 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:11:24.528 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:11:24.528 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:11:24.528 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:11:24.528 element at address: 0x20001947c540 with size: 0.250488 MiB 00:11:24.528 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:11:24.528 element at address: 0x200003adf880 with size: 0.125488 MiB 00:11:24.528 associated memzone info: size: 0.125366 MiB name: RG_ring_2_61249 00:11:24.528 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:11:24.528 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:11:24.528 element at address: 0x200027e66640 with size: 0.023743 MiB 00:11:24.528 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:11:24.528 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:11:24.528 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61249 00:11:24.528 element at address: 0x200027e6c780 with size: 0.002441 MiB 00:11:24.528 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:11:24.528 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:11:24.528 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61249 00:11:24.528 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:11:24.528 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61249 00:11:24.528 element at address: 0x200027e6d240 with size: 0.000305 MiB 00:11:24.528 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:11:24.528 09:52:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:11:24.528 09:52:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61249 00:11:24.528 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 61249 ']' 00:11:24.528 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 61249 00:11:24.528 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:11:24.528 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:24.528 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 61249 00:11:24.528 09:52:01 dpdk_mem_utility -- 
common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:24.528 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:24.528 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 61249' 00:11:24.528 killing process with pid 61249 00:11:24.528 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 61249 00:11:24.528 09:52:01 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 61249 00:11:25.094 00:11:25.094 real 0m2.033s 00:11:25.094 user 0m2.038s 00:11:25.094 sys 0m0.626s 00:11:25.094 09:52:02 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:25.094 ************************************ 00:11:25.094 END TEST dpdk_mem_utility 00:11:25.094 ************************************ 00:11:25.094 09:52:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:11:25.094 09:52:02 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:25.094 09:52:02 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:25.094 09:52:02 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:25.094 09:52:02 -- common/autotest_common.sh@10 -- # set +x 00:11:25.094 ************************************ 00:11:25.094 START TEST event 00:11:25.094 ************************************ 00:11:25.094 09:52:02 event -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:11:25.351 * Looking for test storage... 00:11:25.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:25.351 09:52:02 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:25.352 09:52:02 event -- bdev/nbd_common.sh@6 -- # set -e 00:11:25.352 09:52:02 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:25.352 09:52:02 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:11:25.352 09:52:02 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:25.352 09:52:02 event -- common/autotest_common.sh@10 -- # set +x 00:11:25.352 ************************************ 00:11:25.352 START TEST event_perf 00:11:25.352 ************************************ 00:11:25.352 09:52:02 event.event_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:11:25.352 Running I/O for 1 seconds...[2024-05-15 09:52:02.549573] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:11:25.352 [2024-05-15 09:52:02.549903] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61344 ] 00:11:25.352 [2024-05-15 09:52:02.705302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.609 [2024-05-15 09:52:02.895234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.609 [2024-05-15 09:52:02.895425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.609 [2024-05-15 09:52:02.895528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.609 [2024-05-15 09:52:02.895533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.981 Running I/O for 1 seconds... 
00:11:26.981 lcore 0: 170252 00:11:26.981 lcore 1: 170250 00:11:26.981 lcore 2: 170252 00:11:26.981 lcore 3: 170253 00:11:26.981 done. 00:11:26.981 00:11:26.981 real 0m1.547s 00:11:26.981 user 0m4.315s 00:11:26.981 sys 0m0.101s 00:11:26.981 09:52:04 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:26.981 ************************************ 00:11:26.981 END TEST event_perf 00:11:26.981 ************************************ 00:11:26.981 09:52:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:11:26.981 09:52:04 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:26.981 09:52:04 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:11:26.981 09:52:04 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:26.981 09:52:04 event -- common/autotest_common.sh@10 -- # set +x 00:11:26.981 ************************************ 00:11:26.981 START TEST event_reactor 00:11:26.981 ************************************ 00:11:26.981 09:52:04 event.event_reactor -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:11:26.981 [2024-05-15 09:52:04.141218] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:11:26.981 [2024-05-15 09:52:04.141599] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61377 ] 00:11:26.981 [2024-05-15 09:52:04.279999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.240 [2024-05-15 09:52:04.456967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.616 test_start 00:11:28.616 oneshot 00:11:28.616 tick 100 00:11:28.616 tick 100 00:11:28.616 tick 250 00:11:28.616 tick 100 00:11:28.616 tick 100 00:11:28.616 tick 100 00:11:28.616 tick 250 00:11:28.616 tick 500 00:11:28.616 tick 100 00:11:28.616 tick 100 00:11:28.616 tick 250 00:11:28.616 tick 100 00:11:28.616 tick 100 00:11:28.616 test_end 00:11:28.616 00:11:28.616 real 0m1.504s 00:11:28.616 user 0m1.304s 00:11:28.616 sys 0m0.089s 00:11:28.616 09:52:05 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:28.616 09:52:05 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:11:28.616 ************************************ 00:11:28.616 END TEST event_reactor 00:11:28.616 ************************************ 00:11:28.616 09:52:05 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:28.616 09:52:05 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:11:28.616 09:52:05 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:28.616 09:52:05 event -- common/autotest_common.sh@10 -- # set +x 00:11:28.616 ************************************ 00:11:28.616 START TEST event_reactor_perf 00:11:28.616 ************************************ 00:11:28.616 09:52:05 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:11:28.616 [2024-05-15 09:52:05.705427] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
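The event_perf counters reported a little earlier in this block (lcores 0-3, roughly 170k events each after one second on this 4-core VM) are per-reactor figures; total throughput is simply their sum. A trivial sketch of that arithmetic, with the counts copied from the run above:

    # Sketch: total event throughput from the per-lcore counters above.
    counts=(170252 170250 170252 170253)
    total=0
    for c in "${counts[@]}"; do total=$((total + c)); done
    echo "total: $total events in 1 second"   # 681007 for this run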
00:11:28.616 [2024-05-15 09:52:05.705862] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61418 ] 00:11:28.616 [2024-05-15 09:52:05.850376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.873 [2024-05-15 09:52:06.026077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.248 test_start 00:11:30.248 test_end 00:11:30.248 Performance: 374232 events per second 00:11:30.248 00:11:30.248 real 0m1.514s 00:11:30.248 user 0m1.322s 00:11:30.248 sys 0m0.079s 00:11:30.248 09:52:07 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:30.248 09:52:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:11:30.248 ************************************ 00:11:30.248 END TEST event_reactor_perf 00:11:30.248 ************************************ 00:11:30.248 09:52:07 event -- event/event.sh@49 -- # uname -s 00:11:30.248 09:52:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:11:30.248 09:52:07 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:30.248 09:52:07 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:30.248 09:52:07 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:30.248 09:52:07 event -- common/autotest_common.sh@10 -- # set +x 00:11:30.248 ************************************ 00:11:30.248 START TEST event_scheduler 00:11:30.248 ************************************ 00:11:30.248 09:52:07 event.event_scheduler -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:11:30.248 * Looking for test storage... 00:11:30.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:11:30.248 09:52:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:11:30.248 09:52:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=61479 00:11:30.248 09:52:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:11:30.248 09:52:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:11:30.248 09:52:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 61479 00:11:30.248 09:52:07 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 61479 ']' 00:11:30.248 09:52:07 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.248 09:52:07 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:30.248 09:52:07 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.248 09:52:07 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:30.248 09:52:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:30.248 [2024-05-15 09:52:07.409541] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
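The scheduler test app above is started with -m 0xF, which the DPDK EAL parameters on the next line expand to cores 0-3 (the log then reports four reactors started). Decoding such a hex core mask is just bit inspection; a purely illustrative sketch:

    # Sketch: expand an SPDK/DPDK core mask such as -m 0xF into a core list.
    mask=0xF
    cores=()
    for ((core = 0; core < 64; core++)); do
        if (( (mask >> core) & 1 )); then
            cores+=("$core")
        fi
    done
    echo "cores: ${cores[*]}"   # -> cores: 0 1 2 3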
00:11:30.248 [2024-05-15 09:52:07.409913] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61479 ] 00:11:30.248 [2024-05-15 09:52:07.549272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.506 [2024-05-15 09:52:07.729980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.506 [2024-05-15 09:52:07.730061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.506 [2024-05-15 09:52:07.730133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.506 [2024-05-15 09:52:07.730138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.438 09:52:08 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:31.438 09:52:08 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:11:31.438 09:52:08 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:11:31.438 09:52:08 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 POWER: Env isn't set yet! 00:11:31.438 POWER: Attempting to initialise ACPI cpufreq power management... 00:11:31.438 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:31.438 POWER: Cannot set governor of lcore 0 to userspace 00:11:31.438 POWER: Attempting to initialise PSTAT power management... 00:11:31.438 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:31.438 POWER: Cannot set governor of lcore 0 to performance 00:11:31.438 POWER: Attempting to initialise AMD PSTATE power management... 00:11:31.438 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:31.438 POWER: Cannot set governor of lcore 0 to userspace 00:11:31.438 POWER: Attempting to initialise CPPC power management... 00:11:31.438 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:11:31.438 POWER: Cannot set governor of lcore 0 to userspace 00:11:31.438 POWER: Attempting to initialise VM power management... 00:11:31.438 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:11:31.438 POWER: Unable to set Power Management Environment for lcore 0 00:11:31.438 [2024-05-15 09:52:08.541967] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:11:31.438 [2024-05-15 09:52:08.542077] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:11:31.438 [2024-05-15 09:52:08.542138] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:11:31.438 09:52:08 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.438 09:52:08 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:11:31.438 09:52:08 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 [2024-05-15 09:52:08.684975] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
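The POWER messages above come from the dynamic scheduler probing the host's cpufreq interface (ACPI cpufreq, Intel P-state, AMD P-state, CPPC, then the VM guest channel) and failing each time, after which it falls back and the functional test continues without a DPDK governor. A quick way to see what, if anything, a host exposes is to read the same sysfs files the errors name; on this VM they are presumably missing, which would explain the failures (the exact file set varies by kernel and driver):

    # Sketch: inspect the cpufreq interface the DPDK power library was probing.
    for f in /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver \
             /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor \
             /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors; do
        if [[ -r "$f" ]]; then
            printf '%s: %s\n' "$f" "$(cat "$f")"
        else
            printf '%s: not present\n' "$f"
        fi
    done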
00:11:31.438 09:52:08 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.438 09:52:08 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:11:31.438 09:52:08 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:31.438 09:52:08 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 ************************************ 00:11:31.438 START TEST scheduler_create_thread 00:11:31.438 ************************************ 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 2 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 3 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 4 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 5 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 6 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 7 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 8 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 9 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:31.438 10 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:31.438 09:52:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:32.810 09:52:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:32.810 09:52:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:11:32.810 09:52:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:11:32.810 09:52:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:32.810 09:52:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:33.743 09:52:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:33.743 09:52:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:11:33.743 09:52:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:33.743 09:52:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:34.708 09:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:34.708 09:52:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:11:34.708 09:52:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:11:34.708 09:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:34.708 09:52:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:35.273 ************************************ 00:11:35.273 END TEST scheduler_create_thread 00:11:35.273 ************************************ 00:11:35.273 09:52:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:35.273 00:11:35.273 real 0m3.887s 00:11:35.273 user 0m0.016s 00:11:35.273 sys 0m0.009s 00:11:35.273 09:52:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:35.273 09:52:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:11:35.273 09:52:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:11:35.273 09:52:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 61479 00:11:35.273 09:52:12 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 61479 ']' 00:11:35.273 09:52:12 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 61479 00:11:35.273 09:52:12 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 00:11:35.273 09:52:12 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:35.273 09:52:12 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 61479 00:11:35.274 killing process with pid 61479 00:11:35.274 09:52:12 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:11:35.274 09:52:12 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:11:35.274 09:52:12 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 61479' 00:11:35.274 09:52:12 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 61479 00:11:35.274 09:52:12 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 61479 00:11:35.847 [2024-05-15 09:52:12.966188] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
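The scheduler_create_thread block above exercises a test-only RPC plugin: it registers pinned active and idle threads with the given cpumasks and busy percentages, raises one thread's activity at runtime, deletes another, and finally kills pid 61479 (the scheduler app for this run; the thread IDs 11 and 12 are also specific to it). Roughly the same sequence by hand might look like the sketch below — the plugin name, method names and flags are taken from the xtrace, while the socket, the thread-id capture and the assumption that the plugin module is importable by rpc.py are about the setup, not confirmed by the log:

  RPC='./scripts/rpc.py --plugin scheduler_plugin'   # plugin module must be on PYTHONPATH
  # an always-busy thread pinned to core 0 and an idle thread on the same core
  active_id=$($RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  idle_id=$($RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0)
  $RPC scheduler_thread_set_active "$active_id" 50   # drop it to 50% busy
  $RPC scheduler_thread_delete "$idle_id"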
00:11:36.113 ************************************ 00:11:36.113 END TEST event_scheduler 00:11:36.113 ************************************ 00:11:36.113 00:11:36.113 real 0m6.164s 00:11:36.113 user 0m13.114s 00:11:36.113 sys 0m0.496s 00:11:36.113 09:52:13 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:36.113 09:52:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:11:36.113 09:52:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:11:36.113 09:52:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:11:36.113 09:52:13 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:36.113 09:52:13 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:36.113 09:52:13 event -- common/autotest_common.sh@10 -- # set +x 00:11:36.113 ************************************ 00:11:36.113 START TEST app_repeat 00:11:36.113 ************************************ 00:11:36.113 09:52:13 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:11:36.114 09:52:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.114 09:52:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:36.114 09:52:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:11:36.114 09:52:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:36.114 09:52:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:11:36.114 09:52:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:11:36.114 09:52:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:11:36.378 09:52:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=61608 00:11:36.378 09:52:13 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:11:36.378 09:52:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:11:36.378 09:52:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61608' 00:11:36.378 Process app_repeat pid: 61608 00:11:36.378 09:52:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:36.378 09:52:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:11:36.378 spdk_app_start Round 0 00:11:36.378 09:52:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61608 /var/tmp/spdk-nbd.sock 00:11:36.378 09:52:13 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 61608 ']' 00:11:36.379 09:52:13 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:36.379 09:52:13 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:36.379 09:52:13 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:36.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:36.379 09:52:13 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:36.379 09:52:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:36.379 [2024-05-15 09:52:13.526858] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:11:36.379 [2024-05-15 09:52:13.527228] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61608 ] 00:11:36.379 [2024-05-15 09:52:13.674176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:36.646 [2024-05-15 09:52:13.861507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.646 [2024-05-15 09:52:13.861520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.239 09:52:14 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:37.239 09:52:14 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:11:37.239 09:52:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:37.808 Malloc0 00:11:37.808 09:52:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:38.066 Malloc1 00:11:38.066 09:52:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:38.066 09:52:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:38.067 09:52:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:38.324 /dev/nbd0 00:11:38.581 09:52:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:38.581 09:52:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:11:38.581 09:52:15 event.app_repeat -- 
common/autotest_common.sh@870 -- # break 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:38.581 1+0 records in 00:11:38.581 1+0 records out 00:11:38.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563431 s, 7.3 MB/s 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:11:38.581 09:52:15 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:11:38.581 09:52:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.581 09:52:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:38.581 09:52:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:38.839 /dev/nbd1 00:11:38.839 09:52:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:38.839 09:52:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:38.839 1+0 records in 00:11:38.839 1+0 records out 00:11:38.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530087 s, 7.7 MB/s 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:11:38.839 09:52:16 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:11:38.839 09:52:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.839 09:52:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:38.839 09:52:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:38.839 09:52:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:38.839 
09:52:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:39.097 09:52:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:39.097 { 00:11:39.097 "bdev_name": "Malloc0", 00:11:39.097 "nbd_device": "/dev/nbd0" 00:11:39.097 }, 00:11:39.097 { 00:11:39.097 "bdev_name": "Malloc1", 00:11:39.097 "nbd_device": "/dev/nbd1" 00:11:39.097 } 00:11:39.097 ]' 00:11:39.097 09:52:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:39.097 { 00:11:39.097 "bdev_name": "Malloc0", 00:11:39.097 "nbd_device": "/dev/nbd0" 00:11:39.097 }, 00:11:39.097 { 00:11:39.097 "bdev_name": "Malloc1", 00:11:39.097 "nbd_device": "/dev/nbd1" 00:11:39.097 } 00:11:39.097 ]' 00:11:39.097 09:52:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:39.354 09:52:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:39.354 /dev/nbd1' 00:11:39.354 09:52:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:39.355 /dev/nbd1' 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:39.355 256+0 records in 00:11:39.355 256+0 records out 00:11:39.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00947774 s, 111 MB/s 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:39.355 256+0 records in 00:11:39.355 256+0 records out 00:11:39.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295504 s, 35.5 MB/s 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:39.355 256+0 records in 00:11:39.355 256+0 records out 00:11:39.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307796 s, 34.1 MB/s 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:39.355 09:52:16 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.355 09:52:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:39.613 09:52:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:39.613 09:52:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:39.613 09:52:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:39.613 09:52:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:39.613 09:52:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:39.613 09:52:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:39.613 09:52:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:39.613 09:52:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:39.613 09:52:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:39.613 09:52:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:40.179 09:52:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:40.179 09:52:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:40.179 09:52:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:40.179 09:52:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:40.179 09:52:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:40.179 09:52:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:40.179 09:52:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:40.179 09:52:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:40.179 09:52:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:40.179 09:52:17 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:40.179 09:52:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:40.437 09:52:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:40.437 09:52:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:40.695 09:52:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:41.261 [2024-05-15 09:52:18.395301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.261 [2024-05-15 09:52:18.553983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.261 [2024-05-15 09:52:18.553985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.518 [2024-05-15 09:52:18.644327] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:41.518 [2024-05-15 09:52:18.644721] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:44.094 spdk_app_start Round 1 00:11:44.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:44.094 09:52:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:44.094 09:52:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:11:44.094 09:52:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61608 /var/tmp/spdk-nbd.sock 00:11:44.094 09:52:21 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 61608 ']' 00:11:44.094 09:52:21 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:44.094 09:52:21 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:44.094 09:52:21 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
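Each app_repeat round above runs the same nbd round trip: create two 64 MiB malloc bdevs with 4 KiB blocks over /var/tmp/spdk-nbd.sock, export them as /dev/nbd0 and /dev/nbd1, push a random 1 MiB file onto each device with dd, read it back with cmp, then stop the disks and send SIGTERM via spdk_kill_instance. A condensed single-device sketch, with the RPC names and dd/cmp options taken from the log and the temp-file path chosen only for illustration:

  SOCK=/var/tmp/spdk-nbd.sock
  RPC="./scripts/rpc.py -s $SOCK"
  bdev=$($RPC bdev_malloc_create 64 4096)          # prints the new bdev name (Malloc0 in this run)
  $RPC nbd_start_disk "$bdev" /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0          # verify what was written
  $RPC nbd_stop_disk /dev/nbd0
  $RPC spdk_kill_instance SIGTERM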
00:11:44.094 09:52:21 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:44.094 09:52:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:44.353 09:52:21 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:44.353 09:52:21 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:11:44.353 09:52:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:44.659 Malloc0 00:11:44.659 09:52:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:44.916 Malloc1 00:11:44.916 09:52:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:44.916 09:52:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:45.175 /dev/nbd0 00:11:45.175 09:52:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:45.175 09:52:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:45.175 1+0 records in 00:11:45.175 1+0 records out 
00:11:45.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705805 s, 5.8 MB/s 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:11:45.175 09:52:22 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:11:45.175 09:52:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:45.175 09:52:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:45.175 09:52:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:45.432 /dev/nbd1 00:11:45.432 09:52:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:45.432 09:52:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:45.432 1+0 records in 00:11:45.432 1+0 records out 00:11:45.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666657 s, 6.1 MB/s 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:11:45.432 09:52:22 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:11:45.432 09:52:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:45.433 09:52:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:45.433 09:52:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:45.433 09:52:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:45.433 09:52:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:45.691 09:52:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:45.691 { 00:11:45.691 "bdev_name": "Malloc0", 00:11:45.691 "nbd_device": "/dev/nbd0" 00:11:45.691 }, 00:11:45.691 { 00:11:45.691 "bdev_name": "Malloc1", 00:11:45.691 "nbd_device": "/dev/nbd1" 00:11:45.691 } 
00:11:45.691 ]' 00:11:45.691 09:52:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:45.691 { 00:11:45.691 "bdev_name": "Malloc0", 00:11:45.691 "nbd_device": "/dev/nbd0" 00:11:45.691 }, 00:11:45.691 { 00:11:45.691 "bdev_name": "Malloc1", 00:11:45.691 "nbd_device": "/dev/nbd1" 00:11:45.691 } 00:11:45.691 ]' 00:11:45.691 09:52:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:45.691 /dev/nbd1' 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:45.691 /dev/nbd1' 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:45.691 256+0 records in 00:11:45.691 256+0 records out 00:11:45.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00742365 s, 141 MB/s 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:45.691 256+0 records in 00:11:45.691 256+0 records out 00:11:45.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0343612 s, 30.5 MB/s 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:45.691 09:52:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:45.949 256+0 records in 00:11:45.949 256+0 records out 00:11:45.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349104 s, 30.0 MB/s 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:45.949 09:52:23 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.949 09:52:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:46.206 09:52:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:46.206 09:52:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:46.206 09:52:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:46.206 09:52:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.206 09:52:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.206 09:52:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:46.206 09:52:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:46.206 09:52:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.206 09:52:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.206 09:52:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:46.464 09:52:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:46.464 09:52:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:46.464 09:52:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:46.464 09:52:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.464 09:52:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.464 09:52:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:46.464 09:52:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:46.464 09:52:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.464 09:52:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:46.464 09:52:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:46.464 09:52:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:46.722 09:52:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:46.722 09:52:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:46.981 09:52:24 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:11:46.981 09:52:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:46.981 09:52:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:46.981 09:52:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:46.981 09:52:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:46.981 09:52:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:46.981 09:52:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:46.981 09:52:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:46.981 09:52:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:46.981 09:52:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:46.981 09:52:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:47.239 09:52:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:47.496 [2024-05-15 09:52:24.764724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:47.754 [2024-05-15 09:52:24.934490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.754 [2024-05-15 09:52:24.934495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.755 [2024-05-15 09:52:25.027332] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:47.755 [2024-05-15 09:52:25.027790] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:50.281 spdk_app_start Round 2 00:11:50.281 09:52:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:11:50.281 09:52:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:11:50.281 09:52:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 61608 /var/tmp/spdk-nbd.sock 00:11:50.281 09:52:27 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 61608 ']' 00:11:50.281 09:52:27 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:50.281 09:52:27 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:50.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:50.281 09:52:27 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:11:50.281 09:52:27 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:50.281 09:52:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:50.540 09:52:27 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:50.540 09:52:27 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:11:50.540 09:52:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:50.797 Malloc0 00:11:50.797 09:52:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:11:51.055 Malloc1 00:11:51.055 09:52:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:51.055 09:52:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:51.313 /dev/nbd0 00:11:51.313 09:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:51.313 09:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:51.313 1+0 records in 00:11:51.313 1+0 records out 
00:11:51.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442531 s, 9.3 MB/s 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:11:51.313 09:52:28 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:11:51.313 09:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.313 09:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:51.313 09:52:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:11:51.570 /dev/nbd1 00:11:51.828 09:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:51.828 09:52:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:11:51.828 1+0 records in 00:11:51.828 1+0 records out 00:11:51.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000872534 s, 4.7 MB/s 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:11:51.828 09:52:28 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:11:51.828 09:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:51.828 09:52:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:51.828 09:52:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:51.828 09:52:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:51.828 09:52:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:52.085 { 00:11:52.085 "bdev_name": "Malloc0", 00:11:52.085 "nbd_device": "/dev/nbd0" 00:11:52.085 }, 00:11:52.085 { 00:11:52.085 "bdev_name": "Malloc1", 00:11:52.085 "nbd_device": "/dev/nbd1" 00:11:52.085 } 
00:11:52.085 ]' 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:52.085 { 00:11:52.085 "bdev_name": "Malloc0", 00:11:52.085 "nbd_device": "/dev/nbd0" 00:11:52.085 }, 00:11:52.085 { 00:11:52.085 "bdev_name": "Malloc1", 00:11:52.085 "nbd_device": "/dev/nbd1" 00:11:52.085 } 00:11:52.085 ]' 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:52.085 /dev/nbd1' 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:52.085 /dev/nbd1' 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:11:52.085 256+0 records in 00:11:52.085 256+0 records out 00:11:52.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00926859 s, 113 MB/s 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:52.085 256+0 records in 00:11:52.085 256+0 records out 00:11:52.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267878 s, 39.1 MB/s 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:52.085 256+0 records in 00:11:52.085 256+0 records out 00:11:52.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314138 s, 33.4 MB/s 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:52.085 09:52:29 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.085 09:52:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:52.344 09:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:52.344 09:52:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:52.344 09:52:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:52.344 09:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.344 09:52:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.344 09:52:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:52.344 09:52:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:52.344 09:52:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.344 09:52:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.344 09:52:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:52.936 09:52:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:52.936 09:52:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:52.936 09:52:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:52.936 09:52:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.936 09:52:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.936 09:52:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:52.936 09:52:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:11:52.936 09:52:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.936 09:52:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:52.936 09:52:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.936 09:52:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:53.193 09:52:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:11:53.193 09:52:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:11:53.451 09:52:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:11:54.018 [2024-05-15 09:52:31.114046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:54.018 [2024-05-15 09:52:31.272713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.018 [2024-05-15 09:52:31.272715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.018 [2024-05-15 09:52:31.355719] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:11:54.018 [2024-05-15 09:52:31.356047] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:11:56.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:56.545 09:52:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 61608 /var/tmp/spdk-nbd.sock 00:11:56.545 09:52:33 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 61608 ']' 00:11:56.545 09:52:33 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:56.545 09:52:33 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:56.545 09:52:33 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
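The nbd_dd_data_verify steps traced a little above reduce to a write-then-compare round trip: 1 MiB of random data goes into a scratch file, the file is copied onto each exported /dev/nbdX with O_DIRECT, and later compared back byte-for-byte. A minimal sketch of that pattern, assuming the device list and scratch path are supplied by the caller (variable names here are illustrative, not the helper's actual ones):

    # Sketch: write random data to each NBD device, then verify it (illustrative only).
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256                # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct     # bypass the page cache
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                                # non-zero exit on first mismatch
    done
    rm "$tmp_file"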
00:11:56.545 09:52:33 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:56.545 09:52:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:11:56.803 09:52:34 event.app_repeat -- event/event.sh@39 -- # killprocess 61608 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 61608 ']' 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 61608 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 61608 00:11:56.803 killing process with pid 61608 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 61608' 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@966 -- # kill 61608 00:11:56.803 09:52:34 event.app_repeat -- common/autotest_common.sh@971 -- # wait 61608 00:11:57.371 spdk_app_start is called in Round 0. 00:11:57.371 Shutdown signal received, stop current app iteration 00:11:57.371 Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 reinitialization... 00:11:57.371 spdk_app_start is called in Round 1. 00:11:57.371 Shutdown signal received, stop current app iteration 00:11:57.371 Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 reinitialization... 00:11:57.371 spdk_app_start is called in Round 2. 00:11:57.371 Shutdown signal received, stop current app iteration 00:11:57.371 Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 reinitialization... 00:11:57.371 spdk_app_start is called in Round 3. 00:11:57.371 Shutdown signal received, stop current app iteration 00:11:57.371 09:52:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:11:57.371 09:52:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:11:57.371 00:11:57.371 real 0m21.005s 00:11:57.371 user 0m46.190s 00:11:57.371 sys 0m4.245s 00:11:57.371 09:52:34 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:57.371 09:52:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:11:57.371 ************************************ 00:11:57.371 END TEST app_repeat 00:11:57.371 ************************************ 00:11:57.371 09:52:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:11:57.371 09:52:34 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:57.371 09:52:34 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:57.371 09:52:34 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:57.371 09:52:34 event -- common/autotest_common.sh@10 -- # set +x 00:11:57.371 ************************************ 00:11:57.371 START TEST cpu_locks 00:11:57.371 ************************************ 00:11:57.371 09:52:34 event.cpu_locks -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:11:57.371 * Looking for test storage... 
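The "Round 0..3" summary above comes from the app_repeat loop: each iteration asks the running app to terminate through the spdk_kill_instance RPC with SIGTERM, waits for the shutdown, and starts the next round. Roughly, as an illustration of the loop shape only (the real flow lives in test/event/event.sh and restarts the app and nbd devices between rounds):

    # Sketch of the repeat loop: send SIGTERM over the RPC socket, wait, relaunch.
    for round in 0 1 2 3; do
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3      # give the reactors time to drain and exit
        # ...restart the app and re-register the nbd devices for the next round...
    done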
00:11:57.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:11:57.371 09:52:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:11:57.371 09:52:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:11:57.371 09:52:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:11:57.371 09:52:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:11:57.371 09:52:34 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:57.371 09:52:34 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:57.371 09:52:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:57.371 ************************************ 00:11:57.371 START TEST default_locks 00:11:57.371 ************************************ 00:11:57.371 09:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:11:57.371 09:52:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62260 00:11:57.371 09:52:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:57.372 09:52:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62260 00:11:57.372 09:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 62260 ']' 00:11:57.372 09:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.372 09:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:57.372 09:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.372 09:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:57.372 09:52:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:57.372 [2024-05-15 09:52:34.729649] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:11:57.372 [2024-05-15 09:52:34.730029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62260 ] 00:11:57.630 [2024-05-15 09:52:34.871895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.889 [2024-05-15 09:52:35.045030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.455 09:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:58.455 09:52:35 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:11:58.455 09:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62260 00:11:58.455 09:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:58.455 09:52:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62260 00:11:59.021 09:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62260 00:11:59.021 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 62260 ']' 00:11:59.021 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 62260 00:11:59.021 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:11:59.021 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:59.021 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 62260 00:11:59.021 killing process with pid 62260 00:11:59.021 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:59.021 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:59.021 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 62260' 00:11:59.021 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 62260 00:11:59.021 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 62260 00:11:59.954 09:52:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62260 00:11:59.954 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:11:59.954 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 62260 00:11:59.954 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:11:59.954 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:59.954 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:11:59.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
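The locks_exist call traced above is how the test proves the target actually holds its per-core lock file: it pipes lslocks output for the pid through a grep for the spdk_cpu_lock prefix. A hedged sketch of that check (function name kept, body simplified):

    # Sketch: does the given pid hold any /var/tmp/spdk_cpu_lock_* file lock?
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock    # non-zero exit if no core lock is held
    }
    locks_exist 62260 && echo "core lock held by pid 62260"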
00:11:59.954 ERROR: process (pid: 62260) is no longer running 00:11:59.954 09:52:36 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:59.954 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 62260 00:11:59.954 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 62260 ']' 00:11:59.954 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 843: kill: (62260) - No such process 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:11:59.955 00:11:59.955 real 0m2.351s 00:11:59.955 user 0m2.436s 00:11:59.955 sys 0m0.788s 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:59.955 09:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 ************************************ 00:11:59.955 END TEST default_locks 00:11:59.955 ************************************ 00:11:59.955 09:52:37 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:11:59.955 09:52:37 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:59.955 09:52:37 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:59.955 09:52:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 ************************************ 00:11:59.955 START TEST default_locks_via_rpc 00:11:59.955 ************************************ 00:11:59.955 09:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:11:59.955 09:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62325 00:11:59.955 09:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:59.955 09:52:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # 
waitforlisten 62325 00:11:59.955 09:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 62325 ']' 00:11:59.955 09:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.955 09:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:59.955 09:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.955 09:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:59.955 09:52:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.955 [2024-05-15 09:52:37.122230] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:11:59.955 [2024-05-15 09:52:37.122551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62325 ] 00:11:59.955 [2024-05-15 09:52:37.259311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.212 [2024-05-15 09:52:37.420110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62325 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62325 00:12:01.144 09:52:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:01.401 09:52:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62325 00:12:01.401 09:52:38 event.cpu_locks.default_locks_via_rpc 
-- common/autotest_common.sh@947 -- # '[' -z 62325 ']' 00:12:01.401 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 62325 00:12:01.401 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:12:01.401 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:01.401 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 62325 00:12:01.401 killing process with pid 62325 00:12:01.401 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:01.401 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:01.401 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 62325' 00:12:01.401 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 62325 00:12:01.401 09:52:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 62325 00:12:02.336 00:12:02.336 real 0m2.293s 00:12:02.336 user 0m2.344s 00:12:02.336 sys 0m0.792s 00:12:02.336 09:52:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:02.336 ************************************ 00:12:02.336 END TEST default_locks_via_rpc 00:12:02.336 ************************************ 00:12:02.336 09:52:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.336 09:52:39 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:02.336 09:52:39 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:02.336 09:52:39 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:02.336 09:52:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:02.336 ************************************ 00:12:02.336 START TEST non_locking_app_on_locked_coremask 00:12:02.336 ************************************ 00:12:02.336 09:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:12:02.336 09:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62400 00:12:02.336 09:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62400 /var/tmp/spdk.sock 00:12:02.336 09:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 62400 ']' 00:12:02.336 09:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:02.336 09:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.336 09:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:02.336 09:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:02.336 09:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:02.336 09:52:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:02.336 [2024-05-15 09:52:39.503356] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:02.336 [2024-05-15 09:52:39.503852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62400 ] 00:12:02.336 [2024-05-15 09:52:39.652645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.594 [2024-05-15 09:52:39.814850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:03.159 09:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:03.159 09:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:12:03.159 09:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62428 00:12:03.159 09:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:03.159 09:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62428 /var/tmp/spdk2.sock 00:12:03.159 09:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 62428 ']' 00:12:03.159 09:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:03.159 09:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:03.159 09:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:03.159 09:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:03.159 09:52:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:03.159 [2024-05-15 09:52:40.496258] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:03.159 [2024-05-15 09:52:40.496676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62428 ] 00:12:03.416 [2024-05-15 09:52:40.630684] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
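The "CPU core locks deactivated" notice just above is what --disable-cpumask-locks produces: the second spdk_tgt skips claiming the /var/tmp/spdk_cpu_lock_* files, so it can share core 0 with the instance that already holds the lock. Schematically, with paths shortened relative to the spdk repo root:

    # Sketch: first instance claims core 0, second one opts out of the lock check.
    ./build/bin/spdk_tgt -m 0x1 &                                                 # holds /var/tmp/spdk_cpu_lock_000
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # starts anyway, no lock taken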
00:12:03.416 [2024-05-15 09:52:40.630765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.673 [2024-05-15 09:52:40.965669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.613 09:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:04.613 09:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:12:04.613 09:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62400 00:12:04.613 09:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62400 00:12:04.613 09:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:05.546 09:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62400 00:12:05.546 09:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 62400 ']' 00:12:05.546 09:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 62400 00:12:05.546 09:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:12:05.546 09:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:05.546 09:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 62400 00:12:05.546 killing process with pid 62400 00:12:05.546 09:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:05.546 09:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:05.546 09:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 62400' 00:12:05.546 09:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 62400 00:12:05.546 09:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 62400 00:12:06.920 09:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62428 00:12:06.920 09:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 62428 ']' 00:12:06.920 09:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 62428 00:12:06.920 09:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:12:06.920 09:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:06.920 09:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 62428 00:12:06.920 09:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:06.920 09:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:06.920 09:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 62428' 00:12:06.920 killing process with pid 62428 00:12:06.920 09:52:43 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 62428 00:12:06.920 09:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 62428 00:12:07.488 ************************************ 00:12:07.488 END TEST non_locking_app_on_locked_coremask 00:12:07.488 ************************************ 00:12:07.488 00:12:07.488 real 0m5.164s 00:12:07.488 user 0m5.365s 00:12:07.488 sys 0m1.461s 00:12:07.488 09:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:07.488 09:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:07.488 09:52:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:07.488 09:52:44 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:07.488 09:52:44 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:07.488 09:52:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:07.488 ************************************ 00:12:07.488 START TEST locking_app_on_unlocked_coremask 00:12:07.488 ************************************ 00:12:07.488 09:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:12:07.488 09:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62520 00:12:07.488 09:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 62520 /var/tmp/spdk.sock 00:12:07.488 09:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 62520 ']' 00:12:07.488 09:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:07.488 09:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.488 09:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:07.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.488 09:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.488 09:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:07.488 09:52:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:07.488 [2024-05-15 09:52:44.702487] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:07.488 [2024-05-15 09:52:44.702657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62520 ] 00:12:07.488 [2024-05-15 09:52:44.841454] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:07.488 [2024-05-15 09:52:44.841519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.747 [2024-05-15 09:52:45.004706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.680 09:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:08.680 09:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:12:08.680 09:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62548 00:12:08.680 09:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 62548 /var/tmp/spdk2.sock 00:12:08.680 09:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:08.680 09:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 62548 ']' 00:12:08.680 09:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:08.680 09:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:08.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:08.680 09:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:08.680 09:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:08.680 09:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:08.680 [2024-05-15 09:52:45.794399] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
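Here the order is reversed relative to the previous test: the first instance (pid 62520) runs with --disable-cpumask-locks and leaves core 0 unclaimed, while the second instance (pid 62548) starts with locking enabled and takes the lock itself, which the later locks_exist 62548 check confirms. Reduced to the two launches and the ownership check (paths shortened, sketch only):

    # Sketch: unlocked first app, locked second app; the core lock ends up with the second pid.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &            # no lock taken
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &             # claims the core lock
    pid2=$!
    lslocks -p "$pid2" | grep -q spdk_cpu_lock                       # lock belongs to the second app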
00:12:08.680 [2024-05-15 09:52:45.794540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62548 ] 00:12:08.680 [2024-05-15 09:52:45.946478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.938 [2024-05-15 09:52:46.278447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.874 09:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:09.874 09:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:12:09.874 09:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 62548 00:12:09.874 09:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62548 00:12:09.874 09:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:10.807 09:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 62520 00:12:10.807 09:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 62520 ']' 00:12:10.807 09:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 62520 00:12:10.807 09:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:12:10.807 09:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:10.807 09:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 62520 00:12:10.807 09:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:10.807 killing process with pid 62520 00:12:10.807 09:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:10.807 09:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 62520' 00:12:10.807 09:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 62520 00:12:10.807 09:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 62520 00:12:12.177 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 62548 00:12:12.177 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 62548 ']' 00:12:12.177 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 62548 00:12:12.177 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:12:12.177 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:12.177 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 62548 00:12:12.177 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:12.177 killing process with pid 62548 00:12:12.177 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:12.177 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 62548' 00:12:12.177 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 62548 00:12:12.177 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 62548 00:12:12.745 00:12:12.745 real 0m5.197s 00:12:12.745 user 0m5.482s 00:12:12.745 sys 0m1.486s 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 ************************************ 00:12:12.745 END TEST locking_app_on_unlocked_coremask 00:12:12.745 ************************************ 00:12:12.745 09:52:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:12.745 09:52:49 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:12.745 09:52:49 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:12.745 09:52:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 ************************************ 00:12:12.745 START TEST locking_app_on_locked_coremask 00:12:12.745 ************************************ 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62639 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 62639 /var/tmp/spdk.sock 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 62639 ']' 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:12.745 09:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 [2024-05-15 09:52:49.940059] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:12:12.745 [2024-05-15 09:52:49.940189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62639 ] 00:12:12.745 [2024-05-15 09:52:50.076113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.003 [2024-05-15 09:52:50.241035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62667 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62667 /var/tmp/spdk2.sock 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 62667 /var/tmp/spdk2.sock 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 62667 /var/tmp/spdk2.sock 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 62667 ']' 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:13.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:13.937 09:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:13.937 [2024-05-15 09:52:51.062925] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:12:13.938 [2024-05-15 09:52:51.063044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62667 ] 00:12:13.938 [2024-05-15 09:52:51.213311] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62639 has claimed it. 00:12:13.938 [2024-05-15 09:52:51.213420] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:14.502 ERROR: process (pid: 62667) is no longer running 00:12:14.502 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 843: kill: (62667) - No such process 00:12:14.502 09:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:14.502 09:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:12:14.502 09:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:12:14.502 09:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:14.502 09:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:14.502 09:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:14.502 09:52:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 62639 00:12:14.502 09:52:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62639 00:12:14.502 09:52:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:15.069 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 62639 00:12:15.069 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 62639 ']' 00:12:15.069 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 62639 00:12:15.069 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:12:15.069 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:15.069 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 62639 00:12:15.069 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:15.069 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:15.069 killing process with pid 62639 00:12:15.069 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 62639' 00:12:15.070 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 62639 00:12:15.070 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 62639 00:12:15.635 00:12:15.635 real 0m3.015s 00:12:15.635 user 0m3.366s 00:12:15.635 sys 0m0.820s 00:12:15.635 ************************************ 00:12:15.635 END TEST locking_app_on_locked_coremask 00:12:15.635 ************************************ 00:12:15.635 09:52:52 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:15.635 09:52:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:15.635 09:52:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:15.635 09:52:52 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:15.635 09:52:52 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:15.635 09:52:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:15.635 ************************************ 00:12:15.635 START TEST locking_overlapped_coremask 00:12:15.635 ************************************ 00:12:15.635 09:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:12:15.635 09:52:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62724 00:12:15.635 09:52:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 62724 /var/tmp/spdk.sock 00:12:15.635 09:52:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:15.635 09:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 62724 ']' 00:12:15.635 09:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.635 09:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:15.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.636 09:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.636 09:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:15.636 09:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:15.636 [2024-05-15 09:52:53.011755] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
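The "Cannot create lock on core 0, probably process 62639 has claimed it" error traced in the test that just ended is its expected outcome: with both instances using the default locking behaviour, a second launch on the same core mask must fail. Expressed as a negative test, roughly (paths shortened, sketch only):

    # Sketch: a second instance on an already-claimed mask is expected to fail.
    ./build/bin/spdk_tgt -m 0x1 &                          # claims core 0
    ! ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # exits non-zero: core 0 already claimed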
00:12:15.636 [2024-05-15 09:52:53.011873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62724 ] 00:12:15.896 [2024-05-15 09:52:53.147271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:16.171 [2024-05-15 09:52:53.310291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.171 [2024-05-15 09:52:53.310428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.171 [2024-05-15 09:52:53.310439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62754 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62754 /var/tmp/spdk2.sock 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 62754 /var/tmp/spdk2.sock 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 62754 /var/tmp/spdk2.sock 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 62754 ']' 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:16.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:16.736 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:16.994 [2024-05-15 09:52:54.160756] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:12:16.994 [2024-05-15 09:52:54.160896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62754 ] 00:12:16.994 [2024-05-15 09:52:54.312666] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62724 has claimed it. 00:12:16.994 [2024-05-15 09:52:54.312755] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:17.558 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 843: kill: (62754) - No such process 00:12:17.558 ERROR: process (pid: 62754) is no longer running 00:12:17.558 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:17.558 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 62724 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 62724 ']' 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 62724 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 62724 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:17.559 killing process with pid 62724 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 62724' 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 62724 00:12:17.559 09:52:54 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@971 -- # wait 62724 00:12:18.491 00:12:18.491 real 0m2.625s 00:12:18.491 user 0m6.981s 00:12:18.491 sys 0m0.643s 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:18.491 ************************************ 00:12:18.491 END TEST locking_overlapped_coremask 00:12:18.491 ************************************ 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:18.491 09:52:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:18.491 09:52:55 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:18.491 09:52:55 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:18.491 09:52:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:18.491 ************************************ 00:12:18.491 START TEST locking_overlapped_coremask_via_rpc 00:12:18.491 ************************************ 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62807 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 62807 /var/tmp/spdk.sock 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 62807 ']' 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:18.491 09:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.491 [2024-05-15 09:52:55.680938] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:18.491 [2024-05-15 09:52:55.681046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62807 ] 00:12:18.491 [2024-05-15 09:52:55.818079] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
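locking_overlapped_coremask, which finished above, plays the same idea with partially overlapping masks: the first target takes -m 0x7 (cores 0-2), the second tries -m 0x1c (cores 2-4) and is rejected because core 2 is already claimed, and check_remaining_locks then confirms that exactly the lock files for cores 000-002 remain. A compressed sketch of the overlap and the final check (paths shortened, glob behaviour assumed to match the helper):

    # Sketch: masks 0x7 (cores 0-2) and 0x1c (cores 2-4) collide on core 2.
    ./build/bin/spdk_tgt -m 0x7 &                            # claims cores 0, 1 and 2
    ! ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock    # must fail: core 2 is taken
    locks=(/var/tmp/spdk_cpu_lock_*)
    [[ "${locks[*]}" == "/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002" ]]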
00:12:18.491 [2024-05-15 09:52:55.818153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:18.750 [2024-05-15 09:52:55.994814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.750 [2024-05-15 09:52:55.994958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.750 [2024-05-15 09:52:55.994965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.341 09:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:19.341 09:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:12:19.341 09:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62837 00:12:19.341 09:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 62837 /var/tmp/spdk2.sock 00:12:19.341 09:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 62837 ']' 00:12:19.341 09:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:19.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:19.341 09:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:19.341 09:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:19.341 09:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:19.341 09:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:19.341 09:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.686 [2024-05-15 09:52:56.755314] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:19.686 [2024-05-15 09:52:56.755434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62837 ] 00:12:19.686 [2024-05-15 09:52:56.906261] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:19.686 [2024-05-15 09:52:56.906317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:19.944 [2024-05-15 09:52:57.221713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.944 [2024-05-15 09:52:57.234170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.944 [2024-05-15 09:52:57.234170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:20.878 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:20.878 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.879 [2024-05-15 09:52:57.947257] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62807 has claimed it. 
00:12:20.879 2024/05/15 09:52:57 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:12:20.879 request: 00:12:20.879 { 00:12:20.879 "method": "framework_enable_cpumask_locks", 00:12:20.879 "params": {} 00:12:20.879 } 00:12:20.879 Got JSON-RPC error response 00:12:20.879 GoRPCClient: error on JSON-RPC call 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 62807 /var/tmp/spdk.sock 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 62807 ']' 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:20.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:20.879 09:52:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.879 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:20.879 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:12:20.879 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 62837 /var/tmp/spdk2.sock 00:12:20.879 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 62837 ']' 00:12:20.879 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:20.879 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:20.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:20.879 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:12:20.879 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:20.879 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.444 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:21.444 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:12:21.444 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:21.444 ************************************ 00:12:21.444 END TEST locking_overlapped_coremask_via_rpc 00:12:21.444 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:21.444 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:21.444 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:21.444 00:12:21.444 real 0m2.949s 00:12:21.444 user 0m1.428s 00:12:21.444 sys 0m0.309s 00:12:21.444 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:21.444 09:52:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.444 ************************************ 00:12:21.444 09:52:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:21.444 09:52:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62807 ]] 00:12:21.444 09:52:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62807 00:12:21.444 09:52:58 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 62807 ']' 00:12:21.444 09:52:58 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 62807 00:12:21.444 09:52:58 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:12:21.444 09:52:58 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:21.444 09:52:58 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 62807 00:12:21.444 09:52:58 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:21.444 09:52:58 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:21.444 killing process with pid 62807 00:12:21.444 09:52:58 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 62807' 00:12:21.444 09:52:58 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 62807 00:12:21.444 09:52:58 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 62807 00:12:22.009 09:52:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62837 ]] 00:12:22.009 09:52:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62837 00:12:22.009 09:52:59 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 62837 ']' 00:12:22.009 09:52:59 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 62837 00:12:22.009 09:52:59 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:12:22.009 09:52:59 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:22.009 
09:52:59 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 62837 00:12:22.009 09:52:59 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:12:22.009 09:52:59 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:12:22.009 killing process with pid 62837 00:12:22.009 09:52:59 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 62837' 00:12:22.009 09:52:59 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 62837 00:12:22.009 09:52:59 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 62837 00:12:22.955 09:52:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:22.955 09:52:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:22.955 09:52:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 62807 ]] 00:12:22.956 09:52:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 62807 00:12:22.956 09:52:59 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 62807 ']' 00:12:22.956 09:52:59 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 62807 00:12:22.956 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (62807) - No such process 00:12:22.956 Process with pid 62807 is not found 00:12:22.956 09:52:59 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 62807 is not found' 00:12:22.956 09:52:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 62837 ]] 00:12:22.956 09:52:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 62837 00:12:22.956 09:52:59 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 62837 ']' 00:12:22.956 09:52:59 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 62837 00:12:22.956 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (62837) - No such process 00:12:22.956 Process with pid 62837 is not found 00:12:22.956 09:52:59 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 62837 is not found' 00:12:22.956 09:52:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:22.956 00:12:22.956 real 0m25.436s 00:12:22.956 user 0m42.535s 00:12:22.956 sys 0m7.447s 00:12:22.956 09:52:59 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:22.956 09:52:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:22.956 ************************************ 00:12:22.956 END TEST cpu_locks 00:12:22.956 ************************************ 00:12:22.956 00:12:22.956 real 0m57.638s 00:12:22.956 user 1m48.906s 00:12:22.956 sys 0m12.776s 00:12:22.956 09:53:00 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:22.956 09:53:00 event -- common/autotest_common.sh@10 -- # set +x 00:12:22.956 ************************************ 00:12:22.956 END TEST event 00:12:22.956 ************************************ 00:12:22.956 09:53:00 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:22.956 09:53:00 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:22.956 09:53:00 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:22.956 09:53:00 -- common/autotest_common.sh@10 -- # set +x 00:12:22.956 ************************************ 00:12:22.956 START TEST thread 00:12:22.956 ************************************ 00:12:22.956 09:53:00 thread -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:22.956 * Looking for test storage... 
00:12:22.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:22.956 09:53:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:22.956 09:53:00 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:12:22.956 09:53:00 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:22.956 09:53:00 thread -- common/autotest_common.sh@10 -- # set +x 00:12:22.956 ************************************ 00:12:22.956 START TEST thread_poller_perf 00:12:22.956 ************************************ 00:12:22.956 09:53:00 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:22.956 [2024-05-15 09:53:00.210178] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:22.956 [2024-05-15 09:53:00.210901] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62994 ] 00:12:23.240 [2024-05-15 09:53:00.352301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.240 [2024-05-15 09:53:00.530511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.240 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:24.616 ====================================== 00:12:24.616 busy:2107673700 (cyc) 00:12:24.616 total_run_count: 348000 00:12:24.616 tsc_hz: 2100000000 (cyc) 00:12:24.616 ====================================== 00:12:24.616 poller_cost: 6056 (cyc), 2883 (nsec) 00:12:24.616 00:12:24.616 real 0m1.513s 00:12:24.616 user 0m1.321s 00:12:24.616 sys 0m0.083s 00:12:24.616 09:53:01 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:24.616 09:53:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:24.616 ************************************ 00:12:24.616 END TEST thread_poller_perf 00:12:24.616 ************************************ 00:12:24.616 09:53:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:24.616 09:53:01 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:12:24.616 09:53:01 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:24.616 09:53:01 thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.616 ************************************ 00:12:24.616 START TEST thread_poller_perf 00:12:24.616 ************************************ 00:12:24.616 09:53:01 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:24.616 [2024-05-15 09:53:01.782143] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:24.616 [2024-05-15 09:53:01.782899] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63030 ] 00:12:24.616 [2024-05-15 09:53:01.928569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.874 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:12:24.874 [2024-05-15 09:53:02.089134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.250 ====================================== 00:12:26.250 busy:2102272024 (cyc) 00:12:26.250 total_run_count: 4492000 00:12:26.250 tsc_hz: 2100000000 (cyc) 00:12:26.250 ====================================== 00:12:26.250 poller_cost: 468 (cyc), 222 (nsec) 00:12:26.250 ************************************ 00:12:26.250 END TEST thread_poller_perf 00:12:26.250 ************************************ 00:12:26.250 00:12:26.250 real 0m1.501s 00:12:26.250 user 0m1.301s 00:12:26.250 sys 0m0.088s 00:12:26.250 09:53:03 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:26.250 09:53:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:26.250 09:53:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:26.250 ************************************ 00:12:26.250 END TEST thread 00:12:26.250 ************************************ 00:12:26.250 00:12:26.250 real 0m3.211s 00:12:26.250 user 0m2.681s 00:12:26.250 sys 0m0.315s 00:12:26.250 09:53:03 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:26.250 09:53:03 thread -- common/autotest_common.sh@10 -- # set +x 00:12:26.250 09:53:03 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:26.250 09:53:03 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:12:26.250 09:53:03 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:26.250 09:53:03 -- common/autotest_common.sh@10 -- # set +x 00:12:26.250 ************************************ 00:12:26.250 START TEST accel 00:12:26.250 ************************************ 00:12:26.250 09:53:03 accel -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:26.250 * Looking for test storage... 00:12:26.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:26.250 09:53:03 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:26.250 09:53:03 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:12:26.250 09:53:03 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:26.250 09:53:03 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63104 00:12:26.250 09:53:03 accel -- accel/accel.sh@63 -- # waitforlisten 63104 00:12:26.250 09:53:03 accel -- common/autotest_common.sh@828 -- # '[' -z 63104 ']' 00:12:26.250 09:53:03 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.250 09:53:03 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:26.250 09:53:03 accel -- accel/accel.sh@61 -- # build_accel_config 00:12:26.250 09:53:03 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:26.250 09:53:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:26.250 09:53:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:26.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.250 09:53:03 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:26.250 09:53:03 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:26.250 09:53:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:26.250 09:53:03 accel -- common/autotest_common.sh@10 -- # set +x 00:12:26.250 09:53:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:26.250 09:53:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:26.250 09:53:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:26.250 09:53:03 accel -- accel/accel.sh@41 -- # jq -r . 00:12:26.250 [2024-05-15 09:53:03.519787] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:26.250 [2024-05-15 09:53:03.520203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63104 ] 00:12:26.509 [2024-05-15 09:53:03.668222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.509 [2024-05-15 09:53:03.841645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.443 09:53:04 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:27.443 09:53:04 accel -- common/autotest_common.sh@861 -- # return 0 00:12:27.443 09:53:04 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:12:27.443 09:53:04 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:12:27.443 09:53:04 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:12:27.443 09:53:04 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:12:27.443 09:53:04 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:12:27.443 09:53:04 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:12:27.443 09:53:04 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:12:27.443 09:53:04 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:27.443 09:53:04 accel -- common/autotest_common.sh@10 -- # set +x 00:12:27.443 09:53:04 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:27.443 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.443 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.443 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.443 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.443 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.443 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.443 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.443 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.443 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.443 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.443 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.443 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.443 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.443 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.443 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.443 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.443 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.443 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.443 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.444 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.444 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.444 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.444 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.444 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.444 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.444 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.444 
09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.444 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.444 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.444 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.444 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.444 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.444 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.444 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.444 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.444 09:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:12:27.444 09:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:12:27.444 09:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:12:27.444 09:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:12:27.444 09:53:04 accel -- accel/accel.sh@75 -- # killprocess 63104 00:12:27.444 09:53:04 accel -- common/autotest_common.sh@947 -- # '[' -z 63104 ']' 00:12:27.444 09:53:04 accel -- common/autotest_common.sh@951 -- # kill -0 63104 00:12:27.444 09:53:04 accel -- common/autotest_common.sh@952 -- # uname 00:12:27.444 09:53:04 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:27.444 09:53:04 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 63104 00:12:27.444 09:53:04 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:27.444 09:53:04 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:27.444 09:53:04 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 63104' 00:12:27.444 killing process with pid 63104 00:12:27.444 09:53:04 accel -- common/autotest_common.sh@966 -- # kill 63104 00:12:27.444 09:53:04 accel -- common/autotest_common.sh@971 -- # wait 63104 00:12:28.010 09:53:05 accel -- accel/accel.sh@76 -- # trap - ERR 00:12:28.010 09:53:05 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:12:28.010 09:53:05 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:28.010 09:53:05 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:28.010 09:53:05 accel -- common/autotest_common.sh@10 -- # set +x 00:12:28.010 09:53:05 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:12:28.010 09:53:05 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:12:28.010 09:53:05 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:12:28.010 09:53:05 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:28.010 09:53:05 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:28.010 09:53:05 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:28.010 09:53:05 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:28.010 09:53:05 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:28.010 09:53:05 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:12:28.010 09:53:05 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:12:28.010 09:53:05 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:28.010 09:53:05 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:12:28.268 09:53:05 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:12:28.268 09:53:05 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:12:28.268 09:53:05 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:28.268 09:53:05 accel -- common/autotest_common.sh@10 -- # set +x 00:12:28.268 ************************************ 00:12:28.268 START TEST accel_missing_filename 00:12:28.268 ************************************ 00:12:28.268 09:53:05 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:12:28.268 09:53:05 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:12:28.268 09:53:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:12:28.268 09:53:05 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:28.268 09:53:05 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:28.268 09:53:05 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:28.268 09:53:05 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:28.268 09:53:05 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:12:28.269 09:53:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:12:28.269 09:53:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:12:28.269 09:53:05 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:28.269 09:53:05 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:28.269 09:53:05 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:28.269 09:53:05 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:28.269 09:53:05 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:28.269 09:53:05 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:12:28.269 09:53:05 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:12:28.269 [2024-05-15 09:53:05.441406] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:28.269 [2024-05-15 09:53:05.441776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63180 ] 00:12:28.269 [2024-05-15 09:53:05.592930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.527 [2024-05-15 09:53:05.766849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.527 [2024-05-15 09:53:05.851367] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:28.785 [2024-05-15 09:53:05.968403] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:12:28.785 A filename is required. 
00:12:28.785 09:53:06 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:12:28.785 09:53:06 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:28.785 09:53:06 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:12:28.785 09:53:06 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:12:28.785 09:53:06 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:12:28.785 09:53:06 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:28.785 00:12:28.785 real 0m0.731s 00:12:28.785 user 0m0.493s 00:12:28.785 sys 0m0.169s 00:12:28.785 09:53:06 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:28.785 09:53:06 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:12:28.785 ************************************ 00:12:28.785 END TEST accel_missing_filename 00:12:28.785 ************************************ 00:12:29.043 09:53:06 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:29.043 09:53:06 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:12:29.043 09:53:06 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:29.043 09:53:06 accel -- common/autotest_common.sh@10 -- # set +x 00:12:29.043 ************************************ 00:12:29.043 START TEST accel_compress_verify 00:12:29.043 ************************************ 00:12:29.043 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:29.043 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:12:29.043 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:29.043 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:29.043 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:29.043 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:29.043 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:29.043 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:29.043 09:53:06 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:29.043 09:53:06 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:29.043 09:53:06 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:29.043 09:53:06 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:29.043 09:53:06 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:29.043 09:53:06 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:29.043 09:53:06 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:29.043 09:53:06 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:29.043 09:53:06 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:12:29.043 [2024-05-15 09:53:06.227402] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:29.043 [2024-05-15 09:53:06.228320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63205 ] 00:12:29.043 [2024-05-15 09:53:06.372551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.301 [2024-05-15 09:53:06.543508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.301 [2024-05-15 09:53:06.625512] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:29.560 [2024-05-15 09:53:06.740466] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:12:29.560 00:12:29.560 Compression does not support the verify option, aborting. 00:12:29.560 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:12:29.560 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:29.560 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:12:29.560 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:12:29.560 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:12:29.560 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:29.560 00:12:29.560 real 0m0.718s 00:12:29.560 user 0m0.471s 00:12:29.560 sys 0m0.170s 00:12:29.560 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:29.560 09:53:06 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:12:29.560 ************************************ 00:12:29.560 END TEST accel_compress_verify 00:12:29.560 ************************************ 00:12:29.818 09:53:06 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:12:29.818 09:53:06 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:12:29.818 09:53:06 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:29.818 09:53:06 accel -- common/autotest_common.sh@10 -- # set +x 00:12:29.818 ************************************ 00:12:29.818 START TEST accel_wrong_workload 00:12:29.818 ************************************ 00:12:29.818 09:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:12:29.818 09:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:12:29.818 09:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:12:29.818 09:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:29.818 09:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:29.818 09:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:29.818 09:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:29.818 09:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:12:29.818 09:53:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:12:29.818 09:53:06 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:12:29.818 09:53:06 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:29.818 09:53:06 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:29.818 09:53:06 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:29.818 09:53:06 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:29.818 09:53:06 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:29.818 09:53:06 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:12:29.818 09:53:06 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:12:29.818 Unsupported workload type: foobar 00:12:29.818 [2024-05-15 09:53:07.004142] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:12:29.818 accel_perf options: 00:12:29.818 [-h help message] 00:12:29.818 [-q queue depth per core] 00:12:29.818 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:29.818 [-T number of threads per core 00:12:29.818 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:29.818 [-t time in seconds] 00:12:29.818 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:29.818 [ dif_verify, , dif_generate, dif_generate_copy 00:12:29.818 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:29.818 [-l for compress/decompress workloads, name of uncompressed input file 00:12:29.818 [-S for crc32c workload, use this seed value (default 0) 00:12:29.818 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:29.818 [-f for fill workload, use this BYTE value (default 255) 00:12:29.819 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:29.819 [-y verify result if this switch is on] 00:12:29.819 [-a tasks to allocate per core (default: same value as -q)] 00:12:29.819 Can be used to spread operations across a wider range of memory. 
00:12:29.819 09:53:07 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:12:29.819 09:53:07 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:29.819 09:53:07 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:29.819 09:53:07 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:29.819 00:12:29.819 real 0m0.036s 00:12:29.819 user 0m0.014s 00:12:29.819 sys 0m0.020s 00:12:29.819 09:53:07 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:29.819 09:53:07 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:12:29.819 ************************************ 00:12:29.819 END TEST accel_wrong_workload 00:12:29.819 ************************************ 00:12:29.819 09:53:07 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:12:29.819 09:53:07 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:12:29.819 09:53:07 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:29.819 09:53:07 accel -- common/autotest_common.sh@10 -- # set +x 00:12:29.819 ************************************ 00:12:29.819 START TEST accel_negative_buffers 00:12:29.819 ************************************ 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:12:29.819 09:53:07 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:12:29.819 09:53:07 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:12:29.819 09:53:07 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:29.819 09:53:07 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:29.819 09:53:07 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:29.819 09:53:07 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:29.819 09:53:07 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:29.819 09:53:07 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:12:29.819 09:53:07 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:12:29.819 -x option must be non-negative. 
00:12:29.819 [2024-05-15 09:53:07.093668] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:12:29.819 accel_perf options: 00:12:29.819 [-h help message] 00:12:29.819 [-q queue depth per core] 00:12:29.819 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:12:29.819 [-T number of threads per core 00:12:29.819 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:12:29.819 [-t time in seconds] 00:12:29.819 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:12:29.819 [ dif_verify, , dif_generate, dif_generate_copy 00:12:29.819 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:12:29.819 [-l for compress/decompress workloads, name of uncompressed input file 00:12:29.819 [-S for crc32c workload, use this seed value (default 0) 00:12:29.819 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:12:29.819 [-f for fill workload, use this BYTE value (default 255) 00:12:29.819 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:12:29.819 [-y verify result if this switch is on] 00:12:29.819 [-a tasks to allocate per core (default: same value as -q)] 00:12:29.819 Can be used to spread operations across a wider range of memory. 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:29.819 00:12:29.819 real 0m0.038s 00:12:29.819 user 0m0.023s 00:12:29.819 sys 0m0.012s 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:29.819 09:53:07 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:12:29.819 ************************************ 00:12:29.819 END TEST accel_negative_buffers 00:12:29.819 ************************************ 00:12:29.819 09:53:07 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:12:29.819 09:53:07 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:12:29.819 09:53:07 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:29.819 09:53:07 accel -- common/autotest_common.sh@10 -- # set +x 00:12:29.819 ************************************ 00:12:29.819 START TEST accel_crc32c 00:12:29.819 ************************************ 00:12:29.819 09:53:07 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:29.819 09:53:07 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:29.819 [2024-05-15 09:53:07.190479] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:29.819 [2024-05-15 09:53:07.190852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63269 ] 00:12:30.077 [2024-05-15 09:53:07.344953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.336 [2024-05-15 09:53:07.505456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:30.336 09:53:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:31.710 09:53:08 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:31.710 ************************************ 00:12:31.710 END TEST accel_crc32c 00:12:31.710 ************************************ 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:31.710 09:53:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:31.710 00:12:31.710 real 0m1.717s 00:12:31.710 user 0m1.443s 00:12:31.710 sys 0m0.178s 00:12:31.710 09:53:08 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:31.710 09:53:08 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:31.710 09:53:08 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:12:31.710 09:53:08 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:12:31.710 09:53:08 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:31.710 09:53:08 accel -- common/autotest_common.sh@10 -- # set +x 00:12:31.710 ************************************ 00:12:31.710 START TEST accel_crc32c_C2 00:12:31.710 ************************************ 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:31.710 09:53:08 
accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:31.710 09:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:31.710 [2024-05-15 09:53:08.965887] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:31.710 [2024-05-15 09:53:08.966274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63309 ] 00:12:31.968 [2024-05-15 09:53:09.115951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.968 [2024-05-15 09:53:09.286640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:32.226 09:53:09 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:32.226 09:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:33.598 00:12:33.598 real 0m1.723s 00:12:33.598 user 0m1.457s 00:12:33.598 sys 0m0.169s 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:33.598 09:53:10 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:33.598 ************************************ 00:12:33.598 END TEST accel_crc32c_C2 00:12:33.598 ************************************ 00:12:33.598 09:53:10 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:12:33.598 09:53:10 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:12:33.598 09:53:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:33.598 09:53:10 accel -- common/autotest_common.sh@10 -- # set +x 00:12:33.598 ************************************ 00:12:33.598 START TEST accel_copy 00:12:33.598 ************************************ 00:12:33.599 09:53:10 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:33.599 
09:53:10 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:33.599 09:53:10 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:12:33.599 [2024-05-15 09:53:10.739066] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:33.599 [2024-05-15 09:53:10.739510] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63343 ] 00:12:33.599 [2024-05-15 09:53:10.914805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.856 [2024-05-15 09:53:11.085453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.856 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:33.856 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.856 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.856 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.856 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:33.856 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy 
-- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:33.857 09:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:12:35.229 09:53:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:35.229 00:12:35.229 real 0m1.749s 00:12:35.229 user 0m1.485s 00:12:35.229 sys 0m0.167s 00:12:35.229 09:53:12 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:35.229 ************************************ 00:12:35.229 END TEST accel_copy 00:12:35.229 ************************************ 00:12:35.229 09:53:12 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:12:35.229 09:53:12 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:35.229 09:53:12 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:12:35.229 09:53:12 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:35.229 09:53:12 accel -- common/autotest_common.sh@10 -- # set +x 00:12:35.229 ************************************ 00:12:35.229 START TEST accel_fill 00:12:35.229 ************************************ 00:12:35.229 09:53:12 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:12:35.229 09:53:12 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:12:35.229 [2024-05-15 09:53:12.536688] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
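The fill case traced here drives the same accel_perf example binary as the crc32c and copy cases above; only the -w workload and its extra parameters change, and the harness hands an (empty) accel JSON config over on /dev/fd/62. A minimal sketch of re-running this workload by hand, assuming the SPDK tree sits at the path shown in the trace and assuming accel_perf falls back to the software module when no -c config is supplied:

#!/usr/bin/env bash
# Re-run the 'fill' workload outside the autotest harness.
# The binary path and the flags -t 1 -w fill -f 128 -q 64 -a 64 -y are copied verbatim
# from the accel_fill command line traced in this log; omitting '-c /dev/fd/62' is an
# assumption (the harness only uses it to inject an accel JSON config, and the traced
# config here is empty: accel_json_cfg=()).
SPDK=/home/vagrant/spdk_repo/spdk    # repo location used by this CI job

"$SPDK/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y

Swapping '-w fill -f 128 -q 64 -a 64' for the argument sets seen in the other run_test lines (for example '-w crc32c -y -C 2') reproduces the remaining cases in this block.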
00:12:35.229 [2024-05-15 09:53:12.537038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63378 ] 00:12:35.487 [2024-05-15 09:53:12.689559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.487 [2024-05-15 09:53:12.851479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.745 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:35.745 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.745 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.745 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.745 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:35.745 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.745 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.745 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.745 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@22 -- # 
accel_module=software 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:35.746 09:53:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:37.118 09:53:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:37.118 09:53:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:37.118 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:37.118 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:37.118 09:53:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:37.119 09:53:14 accel.accel_fill 
-- accel/accel.sh@20 -- # val= 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:12:37.119 09:53:14 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:37.119 00:12:37.119 real 0m1.724s 00:12:37.119 user 0m1.452s 00:12:37.119 sys 0m0.170s 00:12:37.119 09:53:14 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:37.119 09:53:14 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:12:37.119 ************************************ 00:12:37.119 END TEST accel_fill 00:12:37.119 ************************************ 00:12:37.119 09:53:14 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:12:37.119 09:53:14 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:12:37.119 09:53:14 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:37.119 09:53:14 accel -- common/autotest_common.sh@10 -- # set +x 00:12:37.119 ************************************ 00:12:37.119 START TEST accel_copy_crc32c 00:12:37.119 ************************************ 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:12:37.119 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:12:37.119 [2024-05-15 09:53:14.315232] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:12:37.119 [2024-05-15 09:53:14.315688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63418 ] 00:12:37.119 [2024-05-15 09:53:14.454712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.377 [2024-05-15 09:53:14.629687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:37.377 09:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:38.750 ************************************ 00:12:38.750 END TEST accel_copy_crc32c 00:12:38.750 ************************************ 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:38.750 00:12:38.750 real 0m1.722s 00:12:38.750 user 0m1.451s 00:12:38.750 sys 0m0.173s 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:38.750 09:53:16 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:12:38.750 09:53:16 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:12:38.750 09:53:16 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:12:38.750 09:53:16 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:38.750 09:53:16 accel -- common/autotest_common.sh@10 -- # set +x 00:12:38.750 ************************************ 00:12:38.750 START TEST accel_copy_crc32c_C2 00:12:38.750 ************************************ 00:12:38.750 09:53:16 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:12:38.751 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:12:38.751 [2024-05-15 09:53:16.096582] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:38.751 [2024-05-15 09:53:16.096872] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63452 ] 00:12:39.009 [2024-05-15 09:53:16.244953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.268 [2024-05-15 09:53:16.421432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.268 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
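Every case in this section is wrapped in the same run_test pattern, so a saved console log can be reduced to a quick completion-and-timing summary using nothing but the marker strings visible above: the START TEST / END TEST banners and the real/user/sys triple printed after each run. A minimal sketch, assuming the console output has been captured to a file (build.log is a hypothetical name) with one log entry per line, as in the raw Jenkins console:

#!/usr/bin/env bash
# Summarize the accel tests recorded in a saved autotest console log.
# Relies only on the markers visible in the trace: 'START TEST <name>' / 'END TEST <name>'
# banners and the 'real 0mX.XXXs' timing line emitted after each test.
log=${1:-build.log}   # hypothetical file name for the captured console output

# One line per test, in run order: did a matching END banner show up?
grep -o 'START TEST [[:alnum:]_]*' "$log" | awk '!seen[$3]++ {print $3}' |
while read -r t; do
    # word-boundary check so accel_copy does not match accel_copy_crc32c
    if grep -qE "END TEST ${t}([^A-Za-z0-9_]|\$)" "$log"; then
        printf '%-26s %s\n' "$t" completed
    else
        printf '%-26s %s\n' "$t" 'no END marker'
    fi
done

# Wall-clock times reported by the harness, in the order the tests ran.
grep -Eo 'real[[:space:]]+[0-9]+m[0-9.]+s' "$log"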
00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:39.269 09:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:40.642 00:12:40.642 real 0m1.739s 00:12:40.642 user 0m1.472s 00:12:40.642 sys 0m0.175s 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:40.642 09:53:17 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:12:40.642 ************************************ 00:12:40.642 END TEST accel_copy_crc32c_C2 00:12:40.642 ************************************ 00:12:40.642 09:53:17 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:12:40.642 09:53:17 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:12:40.642 09:53:17 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:40.642 09:53:17 accel -- common/autotest_common.sh@10 -- # set +x 00:12:40.642 ************************************ 00:12:40.642 START TEST accel_dualcast 00:12:40.642 ************************************ 00:12:40.642 09:53:17 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:12:40.642 09:53:17 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:12:40.642 [2024-05-15 09:53:17.898326] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:12:40.642 [2024-05-15 09:53:17.898699] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63487 ] 00:12:40.902 [2024-05-15 09:53:18.046120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.902 [2024-05-15 09:53:18.234514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.161 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.162 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:41.162 09:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:41.162 09:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:41.162 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:41.162 09:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:42.533 
09:53:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:12:42.533 ************************************ 00:12:42.533 END TEST accel_dualcast 00:12:42.533 ************************************ 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:12:42.533 09:53:19 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:42.533 00:12:42.533 real 0m1.766s 00:12:42.533 user 0m1.483s 00:12:42.533 sys 0m0.181s 00:12:42.533 09:53:19 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:42.533 09:53:19 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:12:42.533 09:53:19 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:12:42.533 09:53:19 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:12:42.533 09:53:19 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:42.533 09:53:19 accel -- common/autotest_common.sh@10 -- # set +x 00:12:42.533 ************************************ 00:12:42.533 START TEST accel_compare 00:12:42.533 ************************************ 00:12:42.533 09:53:19 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:12:42.533 09:53:19 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:12:42.533 [2024-05-15 09:53:19.715874] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:12:42.533 [2024-05-15 09:53:19.716230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63527 ] 00:12:42.533 [2024-05-15 09:53:19.862975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.791 [2024-05-15 09:53:20.041555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.791 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.792 09:53:20 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:42.792 09:53:20 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:44.167 09:53:21 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:12:44.167 09:53:21 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:44.167 00:12:44.167 real 0m1.730s 00:12:44.167 user 0m1.452s 00:12:44.167 sys 0m0.178s 00:12:44.167 09:53:21 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:44.167 09:53:21 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:12:44.167 ************************************ 00:12:44.167 END TEST accel_compare 00:12:44.167 ************************************ 00:12:44.167 09:53:21 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:12:44.167 09:53:21 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:12:44.167 09:53:21 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:44.167 09:53:21 accel -- common/autotest_common.sh@10 -- # set +x 00:12:44.167 ************************************ 00:12:44.167 START TEST accel_xor 00:12:44.167 ************************************ 00:12:44.167 09:53:21 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:44.167 09:53:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:44.167 [2024-05-15 09:53:21.505403] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:12:44.167 [2024-05-15 09:53:21.505507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63567 ] 00:12:44.425 [2024-05-15 09:53:21.650652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.684 [2024-05-15 09:53:21.808743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:12:44.684 09:53:21 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:44.684 09:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:46.058 00:12:46.058 real 0m1.700s 00:12:46.058 user 0m1.435s 00:12:46.058 sys 0m0.172s 00:12:46.058 09:53:23 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:46.058 09:53:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:46.058 ************************************ 00:12:46.058 END TEST accel_xor 00:12:46.058 ************************************ 00:12:46.058 09:53:23 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:12:46.058 09:53:23 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:12:46.058 09:53:23 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:46.058 09:53:23 accel -- common/autotest_common.sh@10 -- # set +x 00:12:46.058 ************************************ 00:12:46.058 START TEST accel_xor 00:12:46.058 ************************************ 00:12:46.058 09:53:23 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:12:46.058 09:53:23 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:12:46.058 [2024-05-15 09:53:23.256306] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
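The second accel_xor pass above is the -x 3 variant: where the first xor run's trace programs "val=2" source buffers, this one sets "val=3". A hedged one-line reproduction, under the same assumptions as the earlier dualcast sketch:

  # -x selects the number of xor source buffers (3 here versus the 2 used in the previous xor run)
  ./build/examples/accel_perf -t 1 -w xor -y -x 3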
00:12:46.058 [2024-05-15 09:53:23.257041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63596 ] 00:12:46.058 [2024-05-15 09:53:23.400863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.315 [2024-05-15 09:53:23.585059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.315 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.315 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.315 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.315 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.315 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:12:46.316 09:53:23 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:46.316 09:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:12:47.691 09:53:24 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:47.691 00:12:47.691 real 0m1.738s 00:12:47.691 user 0m1.456s 00:12:47.691 sys 0m0.184s 00:12:47.691 09:53:24 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:47.691 09:53:24 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:12:47.691 ************************************ 00:12:47.691 END TEST accel_xor 00:12:47.691 ************************************ 00:12:47.691 09:53:25 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:12:47.691 09:53:25 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:12:47.691 09:53:25 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:47.691 09:53:25 accel -- common/autotest_common.sh@10 -- # set +x 00:12:47.691 ************************************ 00:12:47.691 START TEST accel_dif_verify 00:12:47.691 ************************************ 00:12:47.691 09:53:25 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:12:47.691 09:53:25 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:12:47.691 [2024-05-15 09:53:25.051932] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
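The dif_verify test starting above omits -y and, in the trace that follows, programs extra buffer parameters ('4096 bytes' twice, '512 bytes', '8 bytes'); the log does not say which accel_perf option each of those values maps to, so the sketch below only repeats the flags that are actually visible:

  # 1-second DIF-verify run on the software accel module, flags taken from the trace
  ./build/examples/accel_perf -t 1 -w dif_verify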
00:12:47.691 [2024-05-15 09:53:25.052143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63636 ] 00:12:47.949 [2024-05-15 09:53:25.189008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.208 [2024-05-15 09:53:25.354258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:48.208 09:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:49.582 09:53:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:49.582 09:53:26 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:12:49.582 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:12:49.583 09:53:26 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:49.583 00:12:49.583 real 0m1.702s 00:12:49.583 user 0m1.437s 00:12:49.583 sys 0m0.171s 00:12:49.583 09:53:26 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:49.583 09:53:26 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:12:49.583 ************************************ 00:12:49.583 END TEST accel_dif_verify 00:12:49.583 ************************************ 00:12:49.583 09:53:26 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:12:49.583 09:53:26 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:12:49.583 09:53:26 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:49.583 09:53:26 accel -- common/autotest_common.sh@10 -- # set +x 00:12:49.583 ************************************ 00:12:49.583 START TEST accel_dif_generate 00:12:49.583 ************************************ 00:12:49.583 09:53:26 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:12:49.583 09:53:26 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:12:49.583 [2024-05-15 09:53:26.826571] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:49.583 [2024-05-15 09:53:26.826881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63676 ] 00:12:49.840 [2024-05-15 09:53:26.974008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.840 [2024-05-15 09:53:27.134066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.840 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:49.840 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:49.840 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:49.840 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:49.840 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:49.840 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:49.840 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:49.840 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:49.840 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:12:49.841 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:49.841 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:49.841 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:49.841 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:49.841 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:49.841 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:49.841 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:49.841 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:49.841 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:49.841 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 
09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:50.099 09:53:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:12:51.485 09:53:28 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:51.485 00:12:51.485 real 0m1.711s 00:12:51.485 user 0m1.448s 00:12:51.485 sys 0m0.174s 00:12:51.485 09:53:28 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:51.485 
09:53:28 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:12:51.485 ************************************ 00:12:51.485 END TEST accel_dif_generate 00:12:51.485 ************************************ 00:12:51.485 09:53:28 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:12:51.485 09:53:28 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:12:51.485 09:53:28 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:51.485 09:53:28 accel -- common/autotest_common.sh@10 -- # set +x 00:12:51.485 ************************************ 00:12:51.485 START TEST accel_dif_generate_copy 00:12:51.485 ************************************ 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:12:51.485 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:12:51.485 [2024-05-15 09:53:28.591424] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:12:51.485 [2024-05-15 09:53:28.591667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63711 ] 00:12:51.485 [2024-05-15 09:53:28.724340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.745 [2024-05-15 09:53:28.879978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:51.745 09:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:53.123 00:12:53.123 real 0m1.685s 00:12:53.123 user 0m0.015s 00:12:53.123 sys 0m0.002s 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:53.123 09:53:30 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:12:53.123 ************************************ 00:12:53.123 END TEST accel_dif_generate_copy 00:12:53.123 ************************************ 00:12:53.123 09:53:30 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:12:53.123 09:53:30 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:53.123 09:53:30 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:12:53.123 09:53:30 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:53.123 09:53:30 accel -- common/autotest_common.sh@10 -- # set +x 00:12:53.123 ************************************ 00:12:53.123 START TEST accel_comp 00:12:53.123 ************************************ 00:12:53.123 09:53:30 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:12:53.123 09:53:30 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:12:53.123 [2024-05-15 09:53:30.335798] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:53.123 [2024-05-15 09:53:30.336111] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63745 ] 00:12:53.123 [2024-05-15 09:53:30.471937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.381 [2024-05-15 09:53:30.623857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:53.382 09:53:30 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:53.382 09:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:12:54.758 09:53:31 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:54.758 00:12:54.758 real 0m1.678s 00:12:54.758 user 0m1.423s 00:12:54.758 sys 0m0.159s 00:12:54.758 ************************************ 00:12:54.758 END TEST accel_comp 00:12:54.758 ************************************ 00:12:54.758 09:53:31 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:54.758 09:53:31 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:12:54.758 09:53:32 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:54.758 09:53:32 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:12:54.758 09:53:32 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:54.758 09:53:32 accel -- common/autotest_common.sh@10 -- # set +x 00:12:54.758 ************************************ 00:12:54.758 START TEST accel_decomp 00:12:54.758 ************************************ 00:12:54.758 09:53:32 accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:12:54.758 
09:53:32 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:12:54.758 09:53:32 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:12:54.758 [2024-05-15 09:53:32.072294] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:54.758 [2024-05-15 09:53:32.073445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63785 ] 00:12:55.017 [2024-05-15 09:53:32.214376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.017 [2024-05-15 09:53:32.375312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:55.274 09:53:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:56.649 09:53:33 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:56.649 00:12:56.649 real 0m1.715s 00:12:56.649 user 0m1.446s 00:12:56.649 sys 0m0.171s 00:12:56.649 09:53:33 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:56.649 09:53:33 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:12:56.649 ************************************ 00:12:56.649 END TEST accel_decomp 00:12:56.649 ************************************ 00:12:56.649 09:53:33 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:56.649 09:53:33 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:12:56.649 09:53:33 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:56.649 09:53:33 accel -- common/autotest_common.sh@10 -- # set +x 
00:12:56.649 ************************************ 00:12:56.649 START TEST accel_decmop_full 00:12:56.649 ************************************ 00:12:56.649 09:53:33 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:12:56.649 09:53:33 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:12:56.649 [2024-05-15 09:53:33.850726] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:12:56.650 [2024-05-15 09:53:33.851613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63820 ] 00:12:56.650 [2024-05-15 09:53:33.989429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.910 [2024-05-15 09:53:34.152069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.910 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.911 09:53:34 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.911 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.912 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:56.912 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.912 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.912 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:56.912 09:53:34 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:56.912 09:53:34 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:56.912 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:56.912 09:53:34 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.288 09:53:35 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:12:58.288 ************************************ 00:12:58.288 END TEST accel_decmop_full 00:12:58.288 ************************************ 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:58.288 09:53:35 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:58.288 00:12:58.288 real 0m1.711s 00:12:58.288 user 0m0.019s 00:12:58.288 sys 0m0.002s 00:12:58.288 09:53:35 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:58.288 09:53:35 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:12:58.288 09:53:35 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:58.288 09:53:35 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:12:58.288 09:53:35 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:58.288 09:53:35 accel -- common/autotest_common.sh@10 -- # set +x 00:12:58.288 ************************************ 00:12:58.288 START TEST accel_decomp_mcore 00:12:58.288 ************************************ 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:58.288 09:53:35 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:12:58.288 [2024-05-15 09:53:35.614688] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:12:58.288 [2024-05-15 09:53:35.615025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63863 ] 00:12:58.546 [2024-05-15 09:53:35.763037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.804 [2024-05-15 09:53:35.929077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.804 [2024-05-15 09:53:35.929201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.804 [2024-05-15 09:53:35.929332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.804 [2024-05-15 09:53:35.929331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:58.804 09:53:36 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.176 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:00.177 00:13:00.177 real 0m1.746s 00:13:00.177 user 0m0.017s 00:13:00.177 sys 0m0.003s 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:00.177 09:53:37 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:00.177 ************************************ 00:13:00.177 END TEST accel_decomp_mcore 00:13:00.177 ************************************ 00:13:00.177 09:53:37 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:00.177 09:53:37 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:13:00.177 09:53:37 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:00.177 09:53:37 accel -- common/autotest_common.sh@10 -- # set +x 00:13:00.177 ************************************ 00:13:00.177 START TEST accel_decomp_full_mcore 00:13:00.177 ************************************ 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:00.177 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
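For reference, the accel_decomp_full_mcore case being set up above reduces to a single accel_perf invocation. A minimal standalone sketch follows; the per-flag notes are inferred from the values echoed in this trace and should be read as assumptions rather than accel_perf documentation, and the harness additionally feeds its generated JSON config via -c /dev/fd/62, which is omitted here.
# Sketch: software decompress on 4 cores, mirroring the harness call above.
#   -t 1            run for '1 seconds' (as echoed above)
#   -w decompress   workload; becomes accel_opc=decompress
#   -l .../bib      compressed input file used by the decompress tests
#   -y              verify the output (assumption based on the test's -y flag)
#   -o 0            whole-file chunks; the trace reports '111250 bytes' instead of the 4096-byte default
#   -m 0xf          core mask for cores 0-3, matching the four reactor start-up notices that follow
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf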
00:13:00.177 [2024-05-15 09:53:37.422760] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:13:00.177 [2024-05-15 09:53:37.423237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63900 ] 00:13:00.435 [2024-05-15 09:53:37.570238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.435 [2024-05-15 09:53:37.740397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.435 [2024-05-15 09:53:37.740461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.435 [2024-05-15 09:53:37.740562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.435 [2024-05-15 09:53:37.740563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:00.693 09:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:02.065 09:53:39 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:02.065 00:13:02.065 real 0m1.775s 00:13:02.065 user 0m5.162s 00:13:02.065 sys 0m0.202s 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:02.065 09:53:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:02.065 ************************************ 00:13:02.065 END TEST accel_decomp_full_mcore 00:13:02.065 ************************************ 00:13:02.065 09:53:39 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:02.065 09:53:39 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:13:02.065 09:53:39 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:02.065 09:53:39 accel -- common/autotest_common.sh@10 -- # set +x 00:13:02.065 ************************************ 00:13:02.065 START TEST accel_decomp_mthread 00:13:02.065 ************************************ 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:02.065 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:02.065 [2024-05-15 09:53:39.250193] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
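The START TEST / END TEST banners and the real/user/sys triplets throughout this trace are printed by the run_test helper from autotest_common.sh around each test command. The sketch below is a hypothetical minimal equivalent meant only to show where those lines come from; it is not SPDK's actual run_test implementation.
# Hypothetical run_test-style wrapper (illustration only, not the real autotest_common.sh code)
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # bash's time keyword emits the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# Usage mirroring the accel_decomp_mthread invocation in this trace:
run_test_sketch accel_decomp_mthread accel_test -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2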
00:13:02.065 [2024-05-15 09:53:39.251044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63938 ] 00:13:02.065 [2024-05-15 09:53:39.397251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.324 [2024-05-15 09:53:39.562774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:02.324 09:53:39 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:03.698 00:13:03.698 real 0m1.730s 00:13:03.698 user 0m1.442s 00:13:03.698 sys 0m0.190s 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:03.698 09:53:40 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:03.698 ************************************ 00:13:03.698 END TEST accel_decomp_mthread 00:13:03.698 ************************************ 00:13:03.698 09:53:41 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:03.698 09:53:41 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:13:03.698 09:53:41 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:03.698 09:53:41 accel -- common/autotest_common.sh@10 -- # set +x 00:13:03.698 ************************************ 00:13:03.698 START TEST accel_decomp_full_mthread 00:13:03.698 ************************************ 00:13:03.699 09:53:41 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:03.699 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:03.699 [2024-05-15 09:53:41.043886] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
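Each accel case above finishes with the same three checks at accel/accel.sh@27 before its END TEST banner. Written out with the variable names the trace shows being assigned (accel_module at @22, accel_opc at @23), the pass condition is roughly the following paraphrase, not a verbatim copy of accel.sh:
# Pass criteria corresponding to the accel/accel.sh@27 lines in the trace:
[[ -n $accel_module ]]            # a module was chosen ('software' in these runs)
[[ -n $accel_opc ]]               # an opcode was set ('decompress' here)
[[ $accel_module == software ]]   # xtrace shows this as [[ software == \s\o\f\t\w\a\r\e ]]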
00:13:03.699 [2024-05-15 09:53:41.044968] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63978 ] 00:13:03.957 [2024-05-15 09:53:41.192434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.215 [2024-05-15 09:53:41.363785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:13:04.215 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:04.216 09:53:41 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:05.631 00:13:05.631 real 0m1.778s 00:13:05.631 user 0m1.495s 00:13:05.631 sys 0m0.177s 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:05.631 09:53:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:05.631 ************************************ 00:13:05.631 END TEST accel_decomp_full_mthread 00:13:05.631 ************************************ 00:13:05.631 09:53:42 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:13:05.631 09:53:42 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:05.631 09:53:42 accel -- accel/accel.sh@137 -- # build_accel_config 00:13:05.631 09:53:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:05.631 09:53:42 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:13:05.631 09:53:42 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:05.631 09:53:42 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:05.631 09:53:42 accel -- common/autotest_common.sh@10 -- # set +x 00:13:05.631 09:53:42 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:05.631 09:53:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:05.631 09:53:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:05.631 09:53:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:05.631 09:53:42 accel -- accel/accel.sh@41 -- # jq -r . 00:13:05.631 ************************************ 00:13:05.631 START TEST accel_dif_functional_tests 00:13:05.632 ************************************ 00:13:05.632 09:53:42 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:05.632 [2024-05-15 09:53:42.911960] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:13:05.632 [2024-05-15 09:53:42.912888] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64019 ] 00:13:05.892 [2024-05-15 09:53:43.059085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.892 [2024-05-15 09:53:43.235876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.892 [2024-05-15 09:53:43.236035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.892 [2024-05-15 09:53:43.236040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.151 00:13:06.151 00:13:06.151 CUnit - A unit testing framework for C - Version 2.1-3 00:13:06.151 http://cunit.sourceforge.net/ 00:13:06.151 00:13:06.151 00:13:06.151 Suite: accel_dif 00:13:06.151 Test: verify: DIF generated, GUARD check ...passed 00:13:06.151 Test: verify: DIF generated, APPTAG check ...passed 00:13:06.151 Test: verify: DIF generated, REFTAG check ...passed 00:13:06.151 Test: verify: DIF not generated, GUARD check ...[2024-05-15 09:53:43.382177] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:06.151 [2024-05-15 09:53:43.382809] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:06.151 passed 00:13:06.151 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 09:53:43.383066] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:06.151 [2024-05-15 09:53:43.383392] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:06.151 passed 00:13:06.151 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 09:53:43.383597] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:06.151 [2024-05-15 09:53:43.383937] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5apassed 00:13:06.151 Test: verify: APPTAG correct, 
APPTAG check ...5a 00:13:06.151 passed 00:13:06.151 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 09:53:43.384402] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:06.151 passed 00:13:06.151 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:06.151 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:06.151 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:06.151 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed[2024-05-15 09:53:43.385075] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:06.151 00:13:06.151 Test: generate copy: DIF generated, GUARD check ...passed 00:13:06.151 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:06.151 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:06.151 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:06.151 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:06.151 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:06.151 Test: generate copy: iovecs-len validate ...[2024-05-15 09:53:43.386203] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:13:06.151 passed 00:13:06.151 Test: generate copy: buffer alignment validate ...passed 00:13:06.151 00:13:06.151 Run Summary: Type Total Ran Passed Failed Inactive 00:13:06.151 suites 1 1 n/a 0 0 00:13:06.151 tests 20 20 20 0 0 00:13:06.151 asserts 204 204 204 0 n/a 00:13:06.151 00:13:06.151 Elapsed time = 0.009 seconds 00:13:06.409 00:13:06.409 real 0m0.905s 00:13:06.409 user 0m1.257s 00:13:06.409 sys 0m0.234s 00:13:06.409 09:53:43 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:06.409 ************************************ 00:13:06.409 END TEST accel_dif_functional_tests 00:13:06.409 ************************************ 00:13:06.409 09:53:43 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 00:13:06.666 real 0m40.446s 00:13:06.666 user 0m41.192s 00:13:06.666 sys 0m5.673s 00:13:06.666 09:53:43 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:06.666 09:53:43 accel -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 ************************************ 00:13:06.666 END TEST accel 00:13:06.666 ************************************ 00:13:06.666 09:53:43 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:06.666 09:53:43 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:06.666 09:53:43 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:06.666 09:53:43 -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 ************************************ 00:13:06.666 START TEST accel_rpc 00:13:06.666 ************************************ 00:13:06.666 09:53:43 accel_rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:06.666 * Looking for test storage... 
00:13:06.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:06.666 09:53:43 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:06.666 09:53:43 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64089 00:13:06.666 09:53:43 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64089 00:13:06.666 09:53:43 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:06.666 09:53:43 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 64089 ']' 00:13:06.666 09:53:43 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.666 09:53:43 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:06.666 09:53:43 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.666 09:53:43 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:06.666 09:53:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 [2024-05-15 09:53:44.030688] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:13:06.666 [2024-05-15 09:53:44.031057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64089 ] 00:13:06.925 [2024-05-15 09:53:44.175766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.183 [2024-05-15 09:53:44.355263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:13:08.117 09:53:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:08.117 09:53:45 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:08.117 09:53:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:08.117 09:53:45 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:08.117 09:53:45 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.117 ************************************ 00:13:08.117 START TEST accel_assign_opcode 00:13:08.117 ************************************ 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:08.117 [2024-05-15 09:53:45.156543] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # 
rpc_cmd accel_assign_opc -o copy -m software 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:08.117 [2024-05-15 09:53:45.164541] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:08.117 software 00:13:08.117 00:13:08.117 real 0m0.277s 00:13:08.117 user 0m0.056s 00:13:08.117 sys 0m0.012s 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:08.117 09:53:45 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:08.117 ************************************ 00:13:08.117 END TEST accel_assign_opcode 00:13:08.117 ************************************ 00:13:08.117 09:53:45 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64089 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 64089 ']' 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 64089 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 64089 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 64089' 00:13:08.117 killing process with pid 64089 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@966 -- # kill 64089 00:13:08.117 09:53:45 accel_rpc -- common/autotest_common.sh@971 -- # wait 64089 00:13:08.682 ************************************ 00:13:08.682 END TEST accel_rpc 00:13:08.682 ************************************ 00:13:08.682 00:13:08.682 real 0m2.014s 00:13:08.682 user 0m2.184s 00:13:08.682 sys 0m0.524s 00:13:08.682 09:53:45 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:08.682 09:53:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.682 09:53:45 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:08.682 09:53:45 -- 
common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:08.682 09:53:45 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:08.682 09:53:45 -- common/autotest_common.sh@10 -- # set +x 00:13:08.682 ************************************ 00:13:08.682 START TEST app_cmdline 00:13:08.682 ************************************ 00:13:08.682 09:53:45 app_cmdline -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:08.682 * Looking for test storage... 00:13:08.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:08.682 09:53:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:08.682 09:53:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64200 00:13:08.682 09:53:46 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:08.682 09:53:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64200 00:13:08.682 09:53:46 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 64200 ']' 00:13:08.682 09:53:46 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.682 09:53:46 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:08.682 09:53:46 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.683 09:53:46 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:08.683 09:53:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:08.941 [2024-05-15 09:53:46.100071] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:13:08.941 [2024-05-15 09:53:46.101108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64200 ] 00:13:08.941 [2024-05-15 09:53:46.245682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.199 [2024-05-15 09:53:46.353965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.765 09:53:47 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:09.765 09:53:47 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:13:09.765 09:53:47 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:10.022 { 00:13:10.022 "fields": { 00:13:10.022 "commit": "567565736", 00:13:10.022 "major": 24, 00:13:10.022 "minor": 5, 00:13:10.022 "patch": 0, 00:13:10.022 "suffix": "-pre" 00:13:10.022 }, 00:13:10.022 "version": "SPDK v24.05-pre git sha1 567565736" 00:13:10.022 } 00:13:10.022 09:53:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:10.022 09:53:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:10.022 09:53:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:10.022 09:53:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:10.022 09:53:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:10.022 09:53:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:10.022 09:53:47 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:10.022 09:53:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:10.022 09:53:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:10.280 09:53:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:10.280 09:53:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:10.280 09:53:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:10.280 09:53:47 app_cmdline -- common/autotest_common.sh@652 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:10.538 2024/05/15 09:53:47 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:13:10.538 request: 00:13:10.538 { 00:13:10.538 "method": "env_dpdk_get_mem_stats", 00:13:10.538 "params": {} 00:13:10.538 } 00:13:10.538 Got JSON-RPC error response 00:13:10.538 GoRPCClient: error on JSON-RPC call 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:10.538 09:53:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64200 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 64200 ']' 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 64200 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 64200 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 64200' 00:13:10.538 killing process with pid 64200 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@966 -- # kill 64200 00:13:10.538 09:53:47 app_cmdline -- common/autotest_common.sh@971 -- # wait 64200 00:13:10.795 ************************************ 00:13:10.795 END TEST app_cmdline 00:13:10.795 ************************************ 00:13:10.795 00:13:10.795 real 0m2.223s 00:13:10.795 user 0m2.801s 00:13:10.795 sys 0m0.522s 00:13:10.795 09:53:48 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:10.795 09:53:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:11.052 09:53:48 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:11.052 09:53:48 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:13:11.052 09:53:48 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:11.052 09:53:48 -- common/autotest_common.sh@10 -- # set +x 00:13:11.052 ************************************ 00:13:11.052 START TEST version 00:13:11.052 ************************************ 00:13:11.052 09:53:48 version -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:11.052 * Looking for test storage... 
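Aside: the app_cmdline run above exercises the RPC allow-list: with spdk_tgt started under --rpcs-allowed spdk_get_version,rpc_get_methods, those two methods are answered while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 (Method not found). A minimal sketch of the same exchange over the target's UNIX-domain RPC socket, using only the Python standard library; the socket path /var/tmp/spdk.sock is the default seen in this log, and this is an illustration of the wire format rather than the SPDK rpc.py client itself.

import json
import socket

def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
    # One request per connection keeps the sketch short; the real client reuses the socket.
    request = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params is not None:
        request["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a full JSON response arrived")
            buf += chunk
            try:
                return json.loads(buf)  # stop once the accumulated bytes parse as one object
            except json.JSONDecodeError:
                continue

print(spdk_rpc("spdk_get_version"))        # allowed: returns the version object logged above
print(spdk_rpc("env_dpdk_get_mem_stats"))  # not on the allow-list: expect "error": {"code": -32601, ...}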
00:13:11.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:11.052 09:53:48 version -- app/version.sh@17 -- # get_header_version major 00:13:11.052 09:53:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:11.052 09:53:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:11.052 09:53:48 version -- app/version.sh@14 -- # cut -f2 00:13:11.052 09:53:48 version -- app/version.sh@17 -- # major=24 00:13:11.052 09:53:48 version -- app/version.sh@18 -- # get_header_version minor 00:13:11.052 09:53:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:11.052 09:53:48 version -- app/version.sh@14 -- # cut -f2 00:13:11.052 09:53:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:11.052 09:53:48 version -- app/version.sh@18 -- # minor=5 00:13:11.052 09:53:48 version -- app/version.sh@19 -- # get_header_version patch 00:13:11.052 09:53:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:11.052 09:53:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:11.052 09:53:48 version -- app/version.sh@14 -- # cut -f2 00:13:11.052 09:53:48 version -- app/version.sh@19 -- # patch=0 00:13:11.052 09:53:48 version -- app/version.sh@20 -- # get_header_version suffix 00:13:11.052 09:53:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:11.052 09:53:48 version -- app/version.sh@14 -- # cut -f2 00:13:11.052 09:53:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:11.052 09:53:48 version -- app/version.sh@20 -- # suffix=-pre 00:13:11.052 09:53:48 version -- app/version.sh@22 -- # version=24.5 00:13:11.052 09:53:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:11.053 09:53:48 version -- app/version.sh@28 -- # version=24.5rc0 00:13:11.053 09:53:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:11.053 09:53:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:11.311 09:53:48 version -- app/version.sh@30 -- # py_version=24.5rc0 00:13:11.311 09:53:48 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:13:11.311 ************************************ 00:13:11.311 END TEST version 00:13:11.311 ************************************ 00:13:11.311 00:13:11.311 real 0m0.209s 00:13:11.311 user 0m0.112s 00:13:11.311 sys 0m0.133s 00:13:11.311 09:53:48 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:11.311 09:53:48 version -- common/autotest_common.sh@10 -- # set +x 00:13:11.311 09:53:48 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:13:11.311 09:53:48 -- spdk/autotest.sh@194 -- # uname -s 00:13:11.311 09:53:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:13:11.311 09:53:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:11.311 09:53:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:11.311 09:53:48 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:13:11.311 09:53:48 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:13:11.311 09:53:48 -- spdk/autotest.sh@256 -- # timing_exit lib 00:13:11.311 09:53:48 -- common/autotest_common.sh@727 -- # xtrace_disable 
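Aside: version.sh above recovers the release string by grepping the SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX defines out of include/spdk/version.h with grep/cut/tr, builds 24.5rc0 from them, and cross-checks the installed Python package. A rough Python equivalent of that header scrape; the header path is the one used in the log, and the regex-based parsing is an illustration rather than what version.sh itself runs.

import re

HEADER = "/home/vagrant/spdk_repo/spdk/include/spdk/version.h"

def read_spdk_version(path=HEADER):
    # Collect the four defines, e.g. '#define SPDK_VERSION_MAJOR 24'.
    fields = {}
    pattern = re.compile(r'^#define\s+SPDK_VERSION_(MAJOR|MINOR|PATCH|SUFFIX)\s+(.+)$')
    with open(path) as header:
        for line in header:
            match = pattern.match(line.strip())
            if match:
                fields[match.group(1)] = match.group(2).strip().strip('"')
    version = f"{fields['MAJOR']}.{fields['MINOR']}"
    if fields.get("PATCH", "0") not in ("", "0"):
        version += f".{fields['PATCH']}"
    if fields.get("SUFFIX"):
        # The test maps the "-pre" suffix onto the Python-style "rc0" seen above.
        version += "rc0" if fields["SUFFIX"] == "-pre" else fields["SUFFIX"]
    return version

print(read_spdk_version())  # expected to print 24.5rc0 for the tree built in this log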
00:13:11.311 09:53:48 -- common/autotest_common.sh@10 -- # set +x 00:13:11.311 09:53:48 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:13:11.311 09:53:48 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:13:11.311 09:53:48 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:13:11.311 09:53:48 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:13:11.311 09:53:48 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:13:11.311 09:53:48 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:13:11.311 09:53:48 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:11.311 09:53:48 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:11.311 09:53:48 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:11.311 09:53:48 -- common/autotest_common.sh@10 -- # set +x 00:13:11.311 ************************************ 00:13:11.311 START TEST nvmf_tcp 00:13:11.311 ************************************ 00:13:11.311 09:53:48 nvmf_tcp -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:11.311 * Looking for test storage... 00:13:11.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:11.311 09:53:48 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.311 09:53:48 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.311 09:53:48 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.311 09:53:48 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.311 09:53:48 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.311 09:53:48 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.311 09:53:48 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:13:11.311 09:53:48 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.311 09:53:48 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:13:11.570 09:53:48 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:11.570 09:53:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:13:11.570 09:53:48 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:11.570 09:53:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:11.570 09:53:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:11.570 09:53:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:11.570 ************************************ 00:13:11.570 START TEST nvmf_example 00:13:11.570 ************************************ 00:13:11.570 09:53:48 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:11.570 * Looking for test storage... 00:13:11.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@721 -- # xtrace_disable 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:11.570 Cannot find device "nvmf_init_br" 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:11.570 Cannot find device "nvmf_tgt_br" 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:11.570 Cannot find device "nvmf_tgt_br2" 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:11.570 Cannot find device "nvmf_init_br" 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:13:11.570 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:11.570 Cannot find device "nvmf_tgt_br" 00:13:11.571 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:13:11.571 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:11.829 Cannot find device "nvmf_tgt_br2" 00:13:11.829 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:13:11.829 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:11.829 Cannot find device "nvmf_br" 00:13:11.829 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:13:11.829 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:11.829 Cannot find device "nvmf_init_if" 00:13:11.829 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:13:11.829 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:11.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:11.829 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:13:11.829 09:53:48 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:11.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
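Aside: nvmf_veth_init above builds the whole test topology with ip(8): a nvmf_tgt_ns_spdk namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end nvmf_init_if (10.0.0.1/24) left in the default namespace, and, immediately below, an nvmf_br bridge that ties the peer ends together plus an iptables ACCEPT rule for port 4420. A condensed sketch of the namespace/veth/address steps done so far; it simply mirrors the commands already shown in the log and needs root to run.

import subprocess

def sh(cmd):
    # Echo and run one ip(8) command, failing loudly like the shell trace above.
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

NS = "nvmf_tgt_ns_spdk"
sh(f"ip netns add {NS}")
# Three veth pairs: one initiator-facing, two target-facing.
sh("ip link add nvmf_init_if type veth peer name nvmf_init_br")
sh("ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br")
sh("ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2")
# The target-side ends move into the namespace.
sh(f"ip link set nvmf_tgt_if netns {NS}")
sh(f"ip link set nvmf_tgt_if2 netns {NS}")
# Addressing matches the log: 10.0.0.1 initiator, 10.0.0.2/10.0.0.3 target.
sh("ip addr add 10.0.0.1/24 dev nvmf_init_if")
sh(f"ip netns exec {NS} ip addr add 10.0.0.2/24 dev nvmf_tgt_if")
sh(f"ip netns exec {NS} ip addr add 10.0.0.3/24 dev nvmf_tgt_if2")
for link in ("nvmf_init_if", "nvmf_init_br", "nvmf_tgt_br", "nvmf_tgt_br2"):
    sh(f"ip link set {link} up")
for link in ("nvmf_tgt_if", "nvmf_tgt_if2", "lo"):
    sh(f"ip netns exec {NS} ip link set {link} up")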
00:13:11.829 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:12.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:13:12.087 00:13:12.087 --- 10.0.0.2 ping statistics --- 00:13:12.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.087 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:12.087 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:12.087 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:13:12.087 00:13:12.087 --- 10.0.0.3 ping statistics --- 00:13:12.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.087 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:12.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:13:12.087 00:13:12.087 --- 10.0.0.1 ping statistics --- 00:13:12.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.087 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64558 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 64558 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # '[' -z 64558 ']' 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
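Aside: with the ping checks confirming the veth topology, the example target (build/examples/nvmf -i 0 -g 10000 -m 0xF) is launched inside the namespace and waitforlisten polls, with max_retries=100, until pid 64558 is serving its RPC socket at /var/tmp/spdk.sock. A minimal sketch of that kind of readiness poll, under the assumption that retrying a connect on the UNIX socket is enough; the actual waitforlisten helper in autotest_common.sh may additionally verify that the pid is still alive.

import socket
import time

def wait_for_rpc(sock_path="/var/tmp/spdk.sock", max_retries=100, delay=0.1):
    # Keep trying to connect until the target accepts or the retries run out.
    for _ in range(max_retries):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
                sock.connect(sock_path)
            return True  # the target is listening; RPC setup can proceed
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(delay)
    return False

if not wait_for_rpc():
    raise SystemExit("target never started listening on /var/tmp/spdk.sock")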
00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:12.087 09:53:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@861 -- # return 0 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:13.458 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:13.459 09:53:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.459 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:13.459 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:13.459 09:53:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:13.459 09:53:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:13.459 09:53:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:25.666 Initializing NVMe Controllers 00:13:25.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:25.666 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:25.666 Initialization complete. Launching workers. 00:13:25.666 ======================================================== 00:13:25.666 Latency(us) 00:13:25.666 Device Information : IOPS MiB/s Average min max 00:13:25.666 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11737.58 45.85 5452.39 714.20 61116.61 00:13:25.666 ======================================================== 00:13:25.666 Total : 11737.58 45.85 5452.39 714.20 61116.61 00:13:25.666 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:25.666 rmmod nvme_tcp 00:13:25.666 rmmod nvme_fabrics 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64558 ']' 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64558 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' -z 64558 ']' 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # kill -0 64558 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # uname 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 64558 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # process_name=nvmf 00:13:25.666 killing process with pid 64558 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@957 -- # '[' nvmf = sudo ']' 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # echo 'killing process with pid 64558' 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # kill 64558 00:13:25.666 09:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@971 -- # wait 64558 00:13:25.666 nvmf threads initialize successfully 00:13:25.666 bdev subsystem init successfully 00:13:25.666 created a nvmf target service 00:13:25.666 create targets's poll groups done 00:13:25.666 all subsystems of target started 00:13:25.666 nvmf target is running 00:13:25.666 all subsystems of target stopped 00:13:25.666 destroy targets's poll groups done 00:13:25.666 destroyed the nvmf target service 00:13:25.666 bdev subsystem finish successfully 00:13:25.666 nvmf threads destroy successfully 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:25.666 00:13:25.666 real 0m12.707s 00:13:25.666 user 0m44.987s 00:13:25.666 sys 0m2.317s 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:25.666 09:54:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:25.666 ************************************ 00:13:25.666 END TEST nvmf_example 00:13:25.666 ************************************ 00:13:25.666 09:54:01 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:25.666 09:54:01 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:25.666 09:54:01 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:25.667 09:54:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:25.667 ************************************ 00:13:25.667 START TEST nvmf_filesystem 00:13:25.667 ************************************ 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:25.667 * Looking for test storage... 
00:13:25.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:25.667 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:25.667 #define SPDK_CONFIG_H 00:13:25.667 #define SPDK_CONFIG_APPS 1 00:13:25.667 #define SPDK_CONFIG_ARCH native 00:13:25.667 #undef SPDK_CONFIG_ASAN 00:13:25.667 #define SPDK_CONFIG_AVAHI 1 00:13:25.667 #undef SPDK_CONFIG_CET 00:13:25.667 #define SPDK_CONFIG_COVERAGE 1 00:13:25.667 #define SPDK_CONFIG_CROSS_PREFIX 00:13:25.667 #undef SPDK_CONFIG_CRYPTO 00:13:25.667 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:25.667 #undef SPDK_CONFIG_CUSTOMOCF 00:13:25.667 #undef SPDK_CONFIG_DAOS 00:13:25.667 #define SPDK_CONFIG_DAOS_DIR 00:13:25.667 #define SPDK_CONFIG_DEBUG 1 00:13:25.667 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:25.667 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:25.667 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:25.668 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:25.668 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:25.668 #undef SPDK_CONFIG_DPDK_UADK 00:13:25.668 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:25.668 #define SPDK_CONFIG_EXAMPLES 1 00:13:25.668 #undef SPDK_CONFIG_FC 00:13:25.668 #define SPDK_CONFIG_FC_PATH 00:13:25.668 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:25.668 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:25.668 #undef SPDK_CONFIG_FUSE 00:13:25.668 #undef SPDK_CONFIG_FUZZER 00:13:25.668 #define SPDK_CONFIG_FUZZER_LIB 00:13:25.668 #define SPDK_CONFIG_GOLANG 1 00:13:25.668 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:25.668 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:25.668 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:25.668 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:13:25.668 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:25.668 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:25.668 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:25.668 #define SPDK_CONFIG_IDXD 1 00:13:25.668 #undef SPDK_CONFIG_IDXD_KERNEL 00:13:25.668 #undef SPDK_CONFIG_IPSEC_MB 00:13:25.668 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:25.668 #define SPDK_CONFIG_ISAL 1 00:13:25.668 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:25.668 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:25.668 #define SPDK_CONFIG_LIBDIR 00:13:25.668 #undef SPDK_CONFIG_LTO 00:13:25.668 #define SPDK_CONFIG_MAX_LCORES 00:13:25.668 #define SPDK_CONFIG_NVME_CUSE 1 00:13:25.668 #undef SPDK_CONFIG_OCF 00:13:25.668 #define SPDK_CONFIG_OCF_PATH 00:13:25.668 #define SPDK_CONFIG_OPENSSL_PATH 00:13:25.668 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:25.668 #define SPDK_CONFIG_PGO_DIR 00:13:25.668 #undef SPDK_CONFIG_PGO_USE 00:13:25.668 #define SPDK_CONFIG_PREFIX /usr/local 00:13:25.668 #undef SPDK_CONFIG_RAID5F 00:13:25.668 #undef SPDK_CONFIG_RBD 00:13:25.668 #define SPDK_CONFIG_RDMA 1 00:13:25.668 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:25.668 
#define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:25.668 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:25.668 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:25.668 #define SPDK_CONFIG_SHARED 1 00:13:25.668 #undef SPDK_CONFIG_SMA 00:13:25.668 #define SPDK_CONFIG_TESTS 1 00:13:25.668 #undef SPDK_CONFIG_TSAN 00:13:25.668 #define SPDK_CONFIG_UBLK 1 00:13:25.668 #define SPDK_CONFIG_UBSAN 1 00:13:25.668 #undef SPDK_CONFIG_UNIT_TESTS 00:13:25.668 #undef SPDK_CONFIG_URING 00:13:25.668 #define SPDK_CONFIG_URING_PATH 00:13:25.668 #undef SPDK_CONFIG_URING_ZNS 00:13:25.668 #define SPDK_CONFIG_USDT 1 00:13:25.668 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:25.668 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:25.668 #undef SPDK_CONFIG_VFIO_USER 00:13:25.668 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:25.668 #define SPDK_CONFIG_VHOST 1 00:13:25.668 #define SPDK_CONFIG_VIRTIO 1 00:13:25.668 #undef SPDK_CONFIG_VTUNE 00:13:25.668 #define SPDK_CONFIG_VTUNE_DIR 00:13:25.668 #define SPDK_CONFIG_WERROR 1 00:13:25.668 #define SPDK_CONFIG_WPDK_DIR 00:13:25.668 #undef SPDK_CONFIG_XNVME 00:13:25.668 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:25.668 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:25.669 
09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 
00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:25.669 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:25.670 09:54:01 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 64801 ]] 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 64801 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback 
storage_candidates 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Ap9Hho 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.Ap9Hho/tests/target /tmp/spdk.Ap9Hho 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6259531776 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6262906880 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2492362752 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2505166848 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=15342862336 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4217061376 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=15342862336 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4217061376 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6262767616 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6262910976 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=143360 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=845074432 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=98488320 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1252577280 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1252581376 00:13:25.670 09:54:01 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=90769182720 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8933597184 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:13:25.670 * Looking for test storage... 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:13:25.670 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=15342862336 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set -o errtrace 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # true 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # xtrace_fd 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:25.671 09:54:01 
nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.671 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:25.672 Cannot find device "nvmf_tgt_br" 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:25.672 Cannot find device "nvmf_tgt_br2" 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:13:25.672 09:54:01 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:25.672 Cannot find device "nvmf_tgt_br" 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:25.672 Cannot find device "nvmf_tgt_br2" 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:25.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:25.672 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:25.672 09:54:01 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:25.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:13:25.672 00:13:25.672 --- 10.0.0.2 ping statistics --- 00:13:25.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.672 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:25.672 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:25.672 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:13:25.672 00:13:25.672 --- 10.0.0.3 ping statistics --- 00:13:25.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.672 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:25.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:13:25.672 00:13:25.672 --- 10.0.0.1 ping statistics --- 00:13:25.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.672 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:25.672 ************************************ 00:13:25.672 START TEST nvmf_filesystem_no_in_capsule 00:13:25.672 ************************************ 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 0 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@47 -- # in_capsule=0 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=64961 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 64961 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 64961 ']' 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:25.672 09:54:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.672 [2024-05-15 09:54:02.173453] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:13:25.672 [2024-05-15 09:54:02.173591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.672 [2024-05-15 09:54:02.321063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.672 [2024-05-15 09:54:02.488615] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.672 [2024-05-15 09:54:02.488701] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.672 [2024-05-15 09:54:02.488724] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.672 [2024-05-15 09:54:02.488742] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.672 [2024-05-15 09:54:02.488757] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
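For reference, the network plumbing traced above (nvmf/common.sh) reduces to the sketch below; the namespace, interface and address names are taken verbatim from the trace, the ordering is condensed, and a second target interface (nvmf_tgt_if2, 10.0.0.3/24) is created the same way:
ip netns add nvmf_tgt_ns_spdk                                    # namespace the target will run in
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up          # bridge joining the host-side veth ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                               # initiator -> target reachability check before the tests run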
00:13:25.673 [2024-05-15 09:54:02.488907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.673 [2024-05-15 09:54:02.489414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.673 [2024-05-15 09:54:02.489843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.673 [2024-05-15 09:54:02.489858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:25.930 [2024-05-15 09:54:03.264177] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:25.930 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.187 Malloc1 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.187 [2024-05-15 09:54:03.543162] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:26.187 [2024-05-15 09:54:03.543861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:26.187 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:13:26.443 { 00:13:26.443 "aliases": [ 00:13:26.443 "f98d712b-9d88-4851-a06f-a8e19b820b4a" 00:13:26.443 ], 00:13:26.443 "assigned_rate_limits": { 00:13:26.443 "r_mbytes_per_sec": 0, 00:13:26.443 "rw_ios_per_sec": 0, 00:13:26.443 "rw_mbytes_per_sec": 0, 00:13:26.443 "w_mbytes_per_sec": 0 00:13:26.443 }, 00:13:26.443 "block_size": 512, 00:13:26.443 "claim_type": "exclusive_write", 00:13:26.443 "claimed": true, 00:13:26.443 "driver_specific": {}, 00:13:26.443 "memory_domains": [ 00:13:26.443 { 00:13:26.443 "dma_device_id": "system", 00:13:26.443 "dma_device_type": 1 00:13:26.443 }, 00:13:26.443 { 00:13:26.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.443 "dma_device_type": 2 00:13:26.443 } 00:13:26.443 ], 00:13:26.443 "name": "Malloc1", 00:13:26.443 "num_blocks": 1048576, 00:13:26.443 "product_name": "Malloc disk", 00:13:26.443 "supported_io_types": { 00:13:26.443 "abort": true, 00:13:26.443 "compare": false, 00:13:26.443 "compare_and_write": false, 00:13:26.443 "flush": true, 00:13:26.443 "nvme_admin": false, 00:13:26.443 "nvme_io": false, 00:13:26.443 "read": true, 00:13:26.443 "reset": true, 00:13:26.443 
"unmap": true, 00:13:26.443 "write": true, 00:13:26.443 "write_zeroes": true 00:13:26.443 }, 00:13:26.443 "uuid": "f98d712b-9d88-4851-a06f-a8e19b820b4a", 00:13:26.443 "zoned": false 00:13:26.443 } 00:13:26.443 ]' 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:13:26.443 09:54:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:29.025 09:54:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:29.025 09:54:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:29.961 09:54:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:29.961 09:54:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:29.961 09:54:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:13:29.961 09:54:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:29.961 09:54:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:29.961 ************************************ 00:13:29.961 START TEST filesystem_ext4 00:13:29.961 ************************************ 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local force 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:13:29.961 09:54:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:29.961 mke2fs 1.46.5 (30-Dec-2021) 00:13:29.961 Discarding device blocks: 0/522240 done 00:13:29.961 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:29.961 Filesystem UUID: b3d35ab0-12b4-467c-b0a6-193a24987a47 00:13:29.961 Superblock backups stored on blocks: 00:13:29.961 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:29.961 00:13:29.961 Allocating group tables: 0/64 done 00:13:29.961 Writing inode tables: 0/64 done 00:13:29.961 Creating journal (8192 blocks): done 00:13:29.961 Writing superblocks and filesystem accounting information: 0/64 done 00:13:29.961 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@942 -- # return 0 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:29.961 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 64961 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:30.219 00:13:30.219 real 0m0.420s 00:13:30.219 user 0m0.023s 00:13:30.219 sys 0m0.062s 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:30.219 ************************************ 00:13:30.219 END TEST filesystem_ext4 00:13:30.219 ************************************ 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:30.219 09:54:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.219 ************************************ 00:13:30.219 START TEST filesystem_btrfs 00:13:30.219 ************************************ 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local force 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:13:30.219 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:30.477 btrfs-progs v6.6.2 00:13:30.477 See https://btrfs.readthedocs.io for more information. 00:13:30.477 00:13:30.477 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:30.477 NOTE: several default settings have changed in version 5.15, please make sure 00:13:30.477 this does not affect your deployments: 00:13:30.477 - DUP for metadata (-m dup) 00:13:30.477 - enabled no-holes (-O no-holes) 00:13:30.477 - enabled free-space-tree (-R free-space-tree) 00:13:30.477 00:13:30.477 Label: (null) 00:13:30.477 UUID: c1f68a54-4dd1-4ad7-926e-dcc8d6c3c912 00:13:30.477 Node size: 16384 00:13:30.477 Sector size: 4096 00:13:30.477 Filesystem size: 510.00MiB 00:13:30.477 Block group profiles: 00:13:30.477 Data: single 8.00MiB 00:13:30.477 Metadata: DUP 32.00MiB 00:13:30.477 System: DUP 8.00MiB 00:13:30.477 SSD detected: yes 00:13:30.477 Zoned device: no 00:13:30.477 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:30.477 Runtime features: free-space-tree 00:13:30.477 Checksum: crc32c 00:13:30.477 Number of devices: 1 00:13:30.477 Devices: 00:13:30.477 ID SIZE PATH 00:13:30.477 1 510.00MiB /dev/nvme0n1p1 00:13:30.477 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@942 -- # return 0 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 64961 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:30.477 00:13:30.477 real 0m0.293s 00:13:30.477 user 0m0.025s 00:13:30.477 sys 0m0.065s 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:30.477 ************************************ 00:13:30.477 END TEST filesystem_btrfs 00:13:30.477 ************************************ 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:30.477 09:54:07 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.477 ************************************ 00:13:30.477 START TEST filesystem_xfs 00:13:30.477 ************************************ 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local i=0 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local force 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # force=-f 00:13:30.477 09:54:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:30.733 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:30.733 = sectsz=512 attr=2, projid32bit=1 00:13:30.733 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:30.733 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:30.733 data = bsize=4096 blocks=130560, imaxpct=25 00:13:30.733 = sunit=0 swidth=0 blks 00:13:30.733 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:30.733 log =internal log bsize=4096 blocks=16384, version=2 00:13:30.733 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:30.733 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:31.297 Discarding blocks...Done. 
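Condensed from the trace above, the no-in-capsule pass brings the target up and attaches the host as follows (rpc_cmd is the harness wrapper around SPDK's JSON-RPC client on /var/tmp/spdk.sock; every flag is taken from the trace):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs inside the namespace
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0             # TCP transport, in-capsule data size 0
rpc_cmd bdev_malloc_create 512 512 -b Malloc1                    # 512 MiB RAM bdev, 512 B blocks (1048576 blocks)
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 \
    --hostid=8b97099d-9860-4879-a034-2bfa904443b4                # host-side attach over NVMe/TCP
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME           # counts 1 once the namespace shows up
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%; partprobe   # one GPT partition for the filesystem tests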
00:13:31.297 09:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@942 -- # return 0 00:13:31.297 09:54:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 64961 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:34.569 00:13:34.569 real 0m3.459s 00:13:34.569 user 0m0.028s 00:13:34.569 sys 0m0.056s 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:34.569 ************************************ 00:13:34.569 END TEST filesystem_xfs 00:13:34.569 ************************************ 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:13:34.569 
09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 64961 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 64961 ']' 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # kill -0 64961 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # uname 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 64961 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:34.569 killing process with pid 64961 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 64961' 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # kill 64961 00:13:34.569 [2024-05-15 09:54:11.482816] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:34.569 09:54:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # wait 64961 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:35.135 00:13:35.135 real 0m10.139s 00:13:35.135 user 0m36.777s 00:13:35.135 sys 0m2.520s 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:35.135 ************************************ 00:13:35.135 END TEST nvmf_filesystem_no_in_capsule 00:13:35.135 ************************************ 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 
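Each filesystem_* subtest runs the same body from target/filesystem.sh; condensed from the trace (ext4 shown; btrfs and xfs differ only in the mkfs invocation, and the in-capsule pass that follows repeats the whole flow with the transport created with -c 4096 instead of -c 0):
mkfs.ext4 -F /dev/nvme0n1p1                      # make_filesystem: -F for ext4, -f for btrfs/xfs
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync                                             # write + flush over NVMe/TCP
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                               # target process (64961 in the pass above) must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1            # device and partition still visible after the I/O
lsblk -l -o NAME | grep -q -w nvme0n1p1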
00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:35.135 ************************************ 00:13:35.135 START TEST nvmf_filesystem_in_capsule 00:13:35.135 ************************************ 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 4096 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65279 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65279 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 65279 ']' 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:35.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:35.135 09:54:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:35.135 [2024-05-15 09:54:12.346672] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:13:35.135 [2024-05-15 09:54:12.346761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.135 [2024-05-15 09:54:12.505303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.392 [2024-05-15 09:54:12.679341] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.392 [2024-05-15 09:54:12.679416] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.392 [2024-05-15 09:54:12.679429] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.392 [2024-05-15 09:54:12.679439] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:35.392 [2024-05-15 09:54:12.679449] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.392 [2024-05-15 09:54:12.679582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.392 [2024-05-15 09:54:12.679663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.392 [2024-05-15 09:54:12.680130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.392 [2024-05-15 09:54:12.680137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.345 [2024-05-15 09:54:13.391786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.345 Malloc1 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.345 09:54:13 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.345 [2024-05-15 09:54:13.677928] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:36.345 [2024-05-15 09:54:13.678355] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:13:36.345 { 00:13:36.345 "aliases": [ 00:13:36.345 "8efc97e3-725f-4a24-b46d-d30322ad9521" 00:13:36.345 ], 00:13:36.345 "assigned_rate_limits": { 00:13:36.345 "r_mbytes_per_sec": 0, 00:13:36.345 "rw_ios_per_sec": 0, 00:13:36.345 "rw_mbytes_per_sec": 0, 00:13:36.345 "w_mbytes_per_sec": 0 00:13:36.345 }, 00:13:36.345 "block_size": 512, 00:13:36.345 "claim_type": "exclusive_write", 00:13:36.345 "claimed": true, 00:13:36.345 "driver_specific": {}, 00:13:36.345 "memory_domains": [ 00:13:36.345 { 00:13:36.345 "dma_device_id": "system", 00:13:36.345 "dma_device_type": 1 00:13:36.345 }, 00:13:36.345 { 00:13:36.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.345 "dma_device_type": 2 00:13:36.345 } 00:13:36.345 ], 00:13:36.345 "name": "Malloc1", 00:13:36.345 "num_blocks": 1048576, 00:13:36.345 "product_name": "Malloc disk", 00:13:36.345 "supported_io_types": { 00:13:36.345 "abort": true, 00:13:36.345 "compare": false, 00:13:36.345 "compare_and_write": false, 00:13:36.345 "flush": true, 00:13:36.345 "nvme_admin": false, 00:13:36.345 "nvme_io": false, 00:13:36.345 "read": true, 00:13:36.345 "reset": true, 
00:13:36.345 "unmap": true, 00:13:36.345 "write": true, 00:13:36.345 "write_zeroes": true 00:13:36.345 }, 00:13:36.345 "uuid": "8efc97e3-725f-4a24-b46d-d30322ad9521", 00:13:36.345 "zoned": false 00:13:36.345 } 00:13:36.345 ]' 00:13:36.345 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:13:36.603 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:13:36.603 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:13:36.603 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:13:36.603 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:13:36.603 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:13:36.603 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:36.603 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.860 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:36.860 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:13:36.860 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.860 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:13:36.860 09:54:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:13:38.758 09:54:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:13:38.758 09:54:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:13:38.758 09:54:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.758 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 
00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:38.759 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:39.015 09:54:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:39.947 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:39.947 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:39.947 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:13:39.947 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:39.947 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:39.947 ************************************ 00:13:39.947 START TEST filesystem_in_capsule_ext4 00:13:39.947 ************************************ 00:13:39.947 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:39.947 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:39.948 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:39.948 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:39.948 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:13:39.948 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:13:39.948 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:13:39.948 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local force 00:13:39.948 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:13:39.948 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:13:39.948 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:39.948 mke2fs 1.46.5 (30-Dec-2021) 00:13:39.948 Discarding device blocks: 0/522240 done 00:13:39.948 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:39.948 Filesystem UUID: fcbfd76e-e58f-4db2-ba02-27e213158ccc 00:13:39.948 Superblock backups stored on blocks: 00:13:39.948 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:39.948 00:13:39.948 Allocating group tables: 0/64 done 00:13:39.948 Writing inode tables: 0/64 done 00:13:40.205 Creating journal (8192 blocks): done 00:13:40.205 Writing superblocks and filesystem accounting information: 0/64 done 00:13:40.205 00:13:40.205 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@942 -- # return 0 00:13:40.205 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:40.205 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:40.205 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:40.205 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:40.205 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:40.205 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:40.205 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:40.205 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65279 00:13:40.205 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:40.205 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:40.463 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:40.464 00:13:40.464 real 0m0.436s 00:13:40.464 user 0m0.019s 00:13:40.464 sys 0m0.061s 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:40.464 ************************************ 00:13:40.464 END TEST filesystem_in_capsule_ext4 00:13:40.464 ************************************ 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.464 ************************************ 00:13:40.464 START TEST filesystem_in_capsule_btrfs 00:13:40.464 ************************************ 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local force 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:13:40.464 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:40.722 btrfs-progs v6.6.2 00:13:40.722 See https://btrfs.readthedocs.io for more information. 00:13:40.722 00:13:40.722 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:40.722 NOTE: several default settings have changed in version 5.15, please make sure 00:13:40.722 this does not affect your deployments: 00:13:40.722 - DUP for metadata (-m dup) 00:13:40.722 - enabled no-holes (-O no-holes) 00:13:40.722 - enabled free-space-tree (-R free-space-tree) 00:13:40.722 00:13:40.722 Label: (null) 00:13:40.722 UUID: c85259d5-f18f-43b7-a5e1-72a184b41476 00:13:40.722 Node size: 16384 00:13:40.722 Sector size: 4096 00:13:40.722 Filesystem size: 510.00MiB 00:13:40.722 Block group profiles: 00:13:40.722 Data: single 8.00MiB 00:13:40.722 Metadata: DUP 32.00MiB 00:13:40.722 System: DUP 8.00MiB 00:13:40.722 SSD detected: yes 00:13:40.722 Zoned device: no 00:13:40.722 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:40.722 Runtime features: free-space-tree 00:13:40.722 Checksum: crc32c 00:13:40.722 Number of devices: 1 00:13:40.722 Devices: 00:13:40.722 ID SIZE PATH 00:13:40.722 1 510.00MiB /dev/nvme0n1p1 00:13:40.722 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@942 -- # return 0 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65279 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:40.722 00:13:40.722 real 0m0.292s 00:13:40.722 user 0m0.026s 00:13:40.722 sys 0m0.070s 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:40.722 ************************************ 00:13:40.722 END TEST filesystem_in_capsule_btrfs 00:13:40.722 ************************************ 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.722 ************************************ 00:13:40.722 START TEST filesystem_in_capsule_xfs 00:13:40.722 ************************************ 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local i=0 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local force 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # force=-f 00:13:40.722 09:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:40.979 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:40.979 = sectsz=512 attr=2, projid32bit=1 00:13:40.979 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:40.979 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:40.979 data = bsize=4096 blocks=130560, imaxpct=25 00:13:40.979 = sunit=0 swidth=0 blks 00:13:40.979 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:40.979 log =internal log bsize=4096 blocks=16384, version=2 00:13:40.979 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:40.979 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:41.543 Discarding blocks...Done. 
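The ext4, btrfs and xfs passes all run the same target/filesystem.sh body: carve one GPT partition out of the exported namespace, build the filesystem, mount it, do a small touch/sync/rm cycle, unmount, and then check that the nvmf_tgt process and the block devices survived the I/O. A condensed sketch of that flow, paraphrased from the xtrace above (make_filesystem, nvme_name and nvmfpid are the names the traced scripts use; this is an illustration, not the verbatim SPDK source):

    # make_filesystem picks the right force flag per filesystem type
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi   # ext4 wants -F, btrfs/xfs take -f
        mkfs."$fstype" $force "$dev_name"
    }

    nvme_name=nvme0n1
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition over the whole namespace
    partprobe && sleep 1

    for fstype in ext4 btrfs xfs; do
        make_filesystem "$fstype" "/dev/${nvme_name}p1"
        mount "/dev/${nvme_name}p1" /mnt/device
        touch /mnt/device/aaa && sync                      # a little real I/O over NVMe/TCP
        rm /mnt/device/aaa && sync
        umount /mnt/device
        kill -0 "$nvmfpid"                                 # target process (pid 65279 here) must still be alive
        lsblk -l -o NAME | grep -q -w "$nvme_name"         # namespace still visible on the initiator
        lsblk -l -o NAME | grep -q -w "${nvme_name}p1"     # and so is the partition
    done

In the real run each filesystem is wrapped in its own run_test subtest (filesystem_in_capsule_ext4/btrfs/xfs), which is why the "real/user/sys" timing lines are reported per filesystem; the xfs mount and checks follow immediately below.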
00:13:41.543 09:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@942 -- # return 0 00:13:41.543 09:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65279 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:43.451 00:13:43.451 real 0m2.682s 00:13:43.451 user 0m0.016s 00:13:43.451 sys 0m0.059s 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:43.451 ************************************ 00:13:43.451 END TEST filesystem_in_capsule_xfs 00:13:43.451 ************************************ 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.451 09:54:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65279 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 65279 ']' 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # kill -0 65279 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # uname 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:43.451 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 65279 00:13:43.708 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:43.708 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:43.708 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 65279' 00:13:43.708 killing process with pid 65279 00:13:43.708 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # kill 65279 00:13:43.708 [2024-05-15 09:54:20.836786] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:43.708 09:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # wait 65279 00:13:44.272 09:54:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:44.272 00:13:44.272 real 0m9.227s 00:13:44.272 user 0m33.491s 00:13:44.272 sys 0m2.363s 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.273 ************************************ 00:13:44.273 END TEST nvmf_filesystem_in_capsule 00:13:44.273 ************************************ 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 
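With the xfs subtest done, teardown starts: the data partition is dropped, the initiator disconnects from nqn.2016-06.io.spdk:cnode1, the subsystem is deleted over RPC, and killprocess stops the nvmf_tgt whose pid (65279) was recorded at start-up; nvmftestfini then unloads nvme-tcp/nvme-fabrics, as the rmmod lines just below show. A condensed paraphrase of the traced sequence (killprocess is the common/autotest_common.sh helper traced above; the real helper also special-cases sudo-wrapped processes, which is elided here):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the SPDK_TEST partition again
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # detach the NVMe/TCP controller
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                         # already gone, nothing to do
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                        # reap it so the exit code propagates to the test
    }
    killprocess 65279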
00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:44.273 rmmod nvme_tcp 00:13:44.273 rmmod nvme_fabrics 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.273 09:54:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.530 09:54:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:44.530 00:13:44.530 real 0m20.200s 00:13:44.530 user 1m10.513s 00:13:44.530 sys 0m5.298s 00:13:44.530 09:54:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:44.530 ************************************ 00:13:44.530 END TEST nvmf_filesystem 00:13:44.530 09:54:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:44.530 ************************************ 00:13:44.530 09:54:21 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:44.530 09:54:21 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:44.530 09:54:21 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:44.530 09:54:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:44.530 ************************************ 00:13:44.530 START TEST nvmf_target_discovery 00:13:44.530 ************************************ 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:44.530 * Looking for test storage... 
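nvmf_target_discovery runs on the virtual-ethernet topology that nvmftestinit/nvmf_veth_init rebuilds a few lines further down: the target ends of the veth pairs are moved into the nvmf_tgt_ns_spdk namespace, the initiator keeps 10.0.0.1, the target listens on 10.0.0.2 (plus 10.0.0.3 on a second interface), and the host-side peers hang off the nvmf_br bridge. Condensing the ip/iptables trace below into one block (device names and addresses are exactly the ones nvmf/common.sh uses):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target pairs
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends live inside the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the host-side peers together
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                             # reachability checks before nvmf_tgt starts

The "Cannot find device" and "No such file or directory" lines in the trace below are the best-effort cleanup of any leftover topology from a previous run, not errors.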
00:13:44.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.530 09:54:21 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:44.531 Cannot find device "nvmf_tgt_br" 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.531 Cannot find device "nvmf_tgt_br2" 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:44.531 Cannot find device "nvmf_tgt_br" 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:44.531 Cannot find device "nvmf_tgt_br2" 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:13:44.531 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:44.789 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:44.789 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:44.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.789 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:13:44.789 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.789 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.789 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:13:44.789 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.789 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:44.789 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:44.789 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:44.789 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:44.789 09:54:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.789 09:54:22 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:44.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:44.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:13:44.789 00:13:44.789 --- 10.0.0.2 ping statistics --- 00:13:44.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.789 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:44.789 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:44.789 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.789 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:13:44.789 00:13:44.789 --- 10.0.0.3 ping statistics --- 00:13:44.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.789 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:45.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:13:45.055 00:13:45.055 --- 10.0.0.1 ping statistics --- 00:13:45.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.055 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=65742 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 65742 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # '[' -z 65742 ']' 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:45.055 09:54:22 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:45.055 09:54:22 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:45.055 [2024-05-15 09:54:22.257062] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:13:45.055 [2024-05-15 09:54:22.257336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.055 [2024-05-15 09:54:22.395167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.312 [2024-05-15 09:54:22.559226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.312 [2024-05-15 09:54:22.559563] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.312 [2024-05-15 09:54:22.559630] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.312 [2024-05-15 09:54:22.559735] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.312 [2024-05-15 09:54:22.559822] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
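The reactor start-up notices below are the last of the nvmf_tgt boot output; once /var/tmp/spdk.sock is accepting RPCs, discovery.sh drives everything through rpc_cmd (the test wrapper around SPDK's JSON-RPC): it creates the TCP transport, then for each of cnode1..cnode4 creates a null bdev, a subsystem, a namespace and a TCP listener on 10.0.0.2:4420, and finally adds a discovery listener plus a referral to port 4430. The nvme discover output and the nvmf_get_subsystems dump further below are the two views of the same six discovery-log records that the test exercises. Condensed from the trace that follows (flags copied from the traced commands; NULL_BDEV_SIZE=102400 and NULL_BLOCK_SIZE=512 come from discovery.sh):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192

    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # the discovery subsystem itself
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # shows up as discovery log entry 5

    # kernel-initiator view vs. target-side RPC view of the same records
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_get_subsystems

Teardown at the end of the trace mirrors this: each subsystem and null bdev is deleted, the referral is removed, and bdev_get_bdevs is queried to confirm nothing was left behind.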
00:13:45.312 [2024-05-15 09:54:22.560056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.312 [2024-05-15 09:54:22.560135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.312 [2024-05-15 09:54:22.560378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.313 [2024-05-15 09:54:22.560500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.878 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:45.878 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@861 -- # return 0 00:13:45.878 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:45.878 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:45.878 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 [2024-05-15 09:54:23.289261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 Null1 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:46.135 [2024-05-15 09:54:23.362399] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:46.135 [2024-05-15 09:54:23.363076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 Null2 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 Null3 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery 
-- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 Null4 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.135 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.136 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -a 10.0.0.2 -s 4420 00:13:46.394 00:13:46.394 Discovery Log Number of Records 6, Generation counter 6 00:13:46.394 =====Discovery Log Entry 0====== 00:13:46.394 trtype: tcp 00:13:46.394 adrfam: ipv4 00:13:46.394 subtype: current discovery subsystem 00:13:46.394 treq: not required 00:13:46.394 portid: 0 00:13:46.394 trsvcid: 4420 00:13:46.394 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:46.394 traddr: 10.0.0.2 00:13:46.394 eflags: explicit discovery connections, duplicate discovery information 00:13:46.394 sectype: none 00:13:46.394 =====Discovery Log Entry 1====== 00:13:46.394 trtype: tcp 00:13:46.394 adrfam: ipv4 00:13:46.394 subtype: nvme subsystem 00:13:46.394 treq: not required 00:13:46.394 portid: 0 00:13:46.394 trsvcid: 4420 00:13:46.394 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:46.394 traddr: 10.0.0.2 00:13:46.394 eflags: none 00:13:46.394 sectype: none 00:13:46.394 =====Discovery Log Entry 2====== 00:13:46.394 trtype: tcp 00:13:46.394 adrfam: ipv4 00:13:46.394 subtype: nvme subsystem 00:13:46.394 treq: not required 00:13:46.394 portid: 0 00:13:46.394 trsvcid: 4420 00:13:46.394 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:46.394 traddr: 10.0.0.2 00:13:46.394 eflags: none 00:13:46.394 sectype: none 00:13:46.394 =====Discovery Log Entry 3====== 00:13:46.394 trtype: tcp 00:13:46.394 adrfam: ipv4 00:13:46.394 subtype: nvme subsystem 00:13:46.394 treq: not required 00:13:46.394 portid: 0 00:13:46.394 trsvcid: 4420 00:13:46.394 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:46.394 traddr: 10.0.0.2 00:13:46.394 eflags: none 00:13:46.394 sectype: none 00:13:46.394 =====Discovery Log Entry 4====== 00:13:46.394 trtype: tcp 00:13:46.394 adrfam: ipv4 00:13:46.394 subtype: nvme subsystem 00:13:46.394 treq: not required 00:13:46.394 portid: 0 00:13:46.394 trsvcid: 4420 00:13:46.394 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:46.394 traddr: 10.0.0.2 00:13:46.394 eflags: none 00:13:46.394 sectype: none 00:13:46.394 =====Discovery Log Entry 5====== 00:13:46.394 trtype: tcp 00:13:46.394 adrfam: ipv4 00:13:46.394 subtype: discovery subsystem referral 00:13:46.394 treq: not required 00:13:46.394 portid: 0 00:13:46.394 trsvcid: 4430 00:13:46.394 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:46.394 traddr: 10.0.0.2 00:13:46.394 eflags: none 00:13:46.394 sectype: none 00:13:46.394 Perform nvmf subsystem discovery via RPC 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.394 [ 00:13:46.394 { 00:13:46.394 "allow_any_host": true, 00:13:46.394 "hosts": [], 00:13:46.394 "listen_addresses": [ 00:13:46.394 { 00:13:46.394 "adrfam": "IPv4", 00:13:46.394 "traddr": "10.0.0.2", 00:13:46.394 "trsvcid": "4420", 00:13:46.394 "trtype": "TCP" 00:13:46.394 } 00:13:46.394 ], 00:13:46.394 
"nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:46.394 "subtype": "Discovery" 00:13:46.394 }, 00:13:46.394 { 00:13:46.394 "allow_any_host": true, 00:13:46.394 "hosts": [], 00:13:46.394 "listen_addresses": [ 00:13:46.394 { 00:13:46.394 "adrfam": "IPv4", 00:13:46.394 "traddr": "10.0.0.2", 00:13:46.394 "trsvcid": "4420", 00:13:46.394 "trtype": "TCP" 00:13:46.394 } 00:13:46.394 ], 00:13:46.394 "max_cntlid": 65519, 00:13:46.394 "max_namespaces": 32, 00:13:46.394 "min_cntlid": 1, 00:13:46.394 "model_number": "SPDK bdev Controller", 00:13:46.394 "namespaces": [ 00:13:46.394 { 00:13:46.394 "bdev_name": "Null1", 00:13:46.394 "name": "Null1", 00:13:46.394 "nguid": "2423269312734DD8A01ABFE56746802E", 00:13:46.394 "nsid": 1, 00:13:46.394 "uuid": "24232693-1273-4dd8-a01a-bfe56746802e" 00:13:46.394 } 00:13:46.394 ], 00:13:46.394 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:46.394 "serial_number": "SPDK00000000000001", 00:13:46.394 "subtype": "NVMe" 00:13:46.394 }, 00:13:46.394 { 00:13:46.394 "allow_any_host": true, 00:13:46.394 "hosts": [], 00:13:46.394 "listen_addresses": [ 00:13:46.394 { 00:13:46.394 "adrfam": "IPv4", 00:13:46.394 "traddr": "10.0.0.2", 00:13:46.394 "trsvcid": "4420", 00:13:46.394 "trtype": "TCP" 00:13:46.394 } 00:13:46.394 ], 00:13:46.394 "max_cntlid": 65519, 00:13:46.394 "max_namespaces": 32, 00:13:46.394 "min_cntlid": 1, 00:13:46.394 "model_number": "SPDK bdev Controller", 00:13:46.394 "namespaces": [ 00:13:46.394 { 00:13:46.394 "bdev_name": "Null2", 00:13:46.394 "name": "Null2", 00:13:46.394 "nguid": "6F3DFDBF150E4E0584BDD32F47A1E83E", 00:13:46.394 "nsid": 1, 00:13:46.394 "uuid": "6f3dfdbf-150e-4e05-84bd-d32f47a1e83e" 00:13:46.394 } 00:13:46.394 ], 00:13:46.394 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:46.394 "serial_number": "SPDK00000000000002", 00:13:46.394 "subtype": "NVMe" 00:13:46.394 }, 00:13:46.394 { 00:13:46.394 "allow_any_host": true, 00:13:46.394 "hosts": [], 00:13:46.394 "listen_addresses": [ 00:13:46.394 { 00:13:46.394 "adrfam": "IPv4", 00:13:46.394 "traddr": "10.0.0.2", 00:13:46.394 "trsvcid": "4420", 00:13:46.394 "trtype": "TCP" 00:13:46.394 } 00:13:46.394 ], 00:13:46.394 "max_cntlid": 65519, 00:13:46.394 "max_namespaces": 32, 00:13:46.394 "min_cntlid": 1, 00:13:46.394 "model_number": "SPDK bdev Controller", 00:13:46.394 "namespaces": [ 00:13:46.394 { 00:13:46.394 "bdev_name": "Null3", 00:13:46.394 "name": "Null3", 00:13:46.394 "nguid": "199BD6D22CA746D9AC769E5021A7F493", 00:13:46.394 "nsid": 1, 00:13:46.394 "uuid": "199bd6d2-2ca7-46d9-ac76-9e5021a7f493" 00:13:46.394 } 00:13:46.394 ], 00:13:46.394 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:46.394 "serial_number": "SPDK00000000000003", 00:13:46.394 "subtype": "NVMe" 00:13:46.394 }, 00:13:46.394 { 00:13:46.394 "allow_any_host": true, 00:13:46.394 "hosts": [], 00:13:46.394 "listen_addresses": [ 00:13:46.394 { 00:13:46.394 "adrfam": "IPv4", 00:13:46.394 "traddr": "10.0.0.2", 00:13:46.394 "trsvcid": "4420", 00:13:46.394 "trtype": "TCP" 00:13:46.394 } 00:13:46.394 ], 00:13:46.394 "max_cntlid": 65519, 00:13:46.394 "max_namespaces": 32, 00:13:46.394 "min_cntlid": 1, 00:13:46.394 "model_number": "SPDK bdev Controller", 00:13:46.394 "namespaces": [ 00:13:46.394 { 00:13:46.394 "bdev_name": "Null4", 00:13:46.394 "name": "Null4", 00:13:46.394 "nguid": "AAA926D90A204D89A63FA2580E7E0EB7", 00:13:46.394 "nsid": 1, 00:13:46.394 "uuid": "aaa926d9-0a20-4d89-a63f-a2580e7e0eb7" 00:13:46.394 } 00:13:46.394 ], 00:13:46.394 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:46.394 "serial_number": "SPDK00000000000004", 00:13:46.394 "subtype": 
"NVMe" 00:13:46.394 } 00:13:46.394 ] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.394 09:54:23 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.394 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.395 rmmod nvme_tcp 00:13:46.395 rmmod nvme_fabrics 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 65742 ']' 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 65742 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' -z 65742 ']' 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # kill -0 65742 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # uname 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:46.395 09:54:23 nvmf_tcp.nvmf_target_discovery 
-- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 65742 00:13:46.653 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:46.653 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:46.653 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 65742' 00:13:46.653 killing process with pid 65742 00:13:46.653 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # kill 65742 00:13:46.653 [2024-05-15 09:54:23.782563] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:46.653 09:54:23 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@971 -- # wait 65742 00:13:46.910 09:54:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.910 09:54:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:46.910 09:54:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:46.910 09:54:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.910 09:54:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.910 09:54:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.910 09:54:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.910 09:54:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.910 09:54:24 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:46.910 00:13:46.910 real 0m2.483s 00:13:46.910 user 0m6.237s 00:13:46.910 sys 0m0.733s 00:13:46.910 09:54:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:46.910 09:54:24 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.910 ************************************ 00:13:46.910 END TEST nvmf_target_discovery 00:13:46.910 ************************************ 00:13:46.910 09:54:24 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:46.910 09:54:24 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:46.910 09:54:24 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:46.910 09:54:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:46.910 ************************************ 00:13:46.910 START TEST nvmf_referrals 00:13:46.910 ************************************ 00:13:46.910 09:54:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:47.202 * Looking for test storage... 
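The nvmf_target_discovery run above ends with a teardown pass: each of the four test subsystems is deleted together with its backing null bdev, the discovery referral is dropped, and bdev_get_bdevs is checked to confirm nothing is left behind. A minimal standalone sketch of that cleanup, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and the rpc.py path used in this workspace:

# Remove the four test subsystems and their null bdevs, then the referral.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this job; adjust as needed
for i in $(seq 1 4); do
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"    # drop subsystem cnode$i
    "$rpc" bdev_null_delete "Null$i"                              # drop its backing null bdev
done
"$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430  # drop the discovery referral
"$rpc" bdev_get_bdevs | jq -r '.[].name'                          # expect empty output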
00:13:47.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:47.202 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:47.202 Cannot find device "nvmf_tgt_br" 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:47.203 Cannot find device "nvmf_tgt_br2" 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:47.203 Cannot find device "nvmf_tgt_br" 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:47.203 Cannot find device "nvmf_tgt_br2" 
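The "Cannot find device" and "Cannot open network namespace" messages here are expected: nvmf_veth_init first tears down any topology left over from a previous run, then rebuilds it with the ip commands traced below. Condensed (the second target interface nvmf_tgt_if2/10.0.0.3 is omitted for brevity), the topology is:

# Veth pair per endpoint, target end moved into a network namespace,
# host-side ends tied together with a bridge. Run as root.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br           # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                    # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                   # bridge joining both veth ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br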
00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:47.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:47.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:47.203 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:47.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:13:47.483 00:13:47.483 --- 10.0.0.2 ping statistics --- 00:13:47.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.483 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:47.483 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:47.483 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:13:47.483 00:13:47.483 --- 10.0.0.3 ping statistics --- 00:13:47.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.483 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:47.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:47.483 00:13:47.483 --- 10.0.0.1 ping statistics --- 00:13:47.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.483 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=65972 00:13:47.483 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:47.484 09:54:24 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 65972 00:13:47.484 09:54:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # '[' -z 65972 ']' 00:13:47.484 09:54:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.484 09:54:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:47.484 09:54:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
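With connectivity confirmed, nvmfappstart launches nvmf_tgt inside the namespace and waits for its RPC socket (pid 65972 in this run). An equivalent launch-and-wait is sketched below; the polling loop is an illustration, not the harness's actual waitforlisten helper:

# Start the target in the namespace and block until the RPC socket answers.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock"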
00:13:47.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.484 09:54:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:47.484 09:54:24 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:47.484 [2024-05-15 09:54:24.836218] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:13:47.484 [2024-05-15 09:54:24.836352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.741 [2024-05-15 09:54:24.978084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.999 [2024-05-15 09:54:25.142961] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.999 [2024-05-15 09:54:25.143038] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.999 [2024-05-15 09:54:25.143051] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.999 [2024-05-15 09:54:25.143061] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.999 [2024-05-15 09:54:25.143070] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.999 [2024-05-15 09:54:25.143188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.999 [2024-05-15 09:54:25.143263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.999 [2024-05-15 09:54:25.143839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.999 [2024-05-15 09:54:25.143859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.930 09:54:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:48.930 09:54:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@861 -- # return 0 00:13:48.930 09:54:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:48.930 09:54:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:48.931 09:54:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.931 09:54:25 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.931 09:54:25 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.931 09:54:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.931 09:54:25 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.931 [2024-05-15 09:54:26.009871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.931 [2024-05-15 09:54:26.036302] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in 
favor of trtype to be removed in v24.09 00:13:48.931 [2024-05-15 09:54:26.036682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.931 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 
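The referral round-trip exercised above (add three referrals, read them back over RPC, confirm the initiator sees the same set in the discovery log, then remove them) condenses to the snippet below. The jq filter is the one the test itself uses; the test additionally passes the generated --hostnqn/--hostid values, omitted here for brevity:

# Register referrals, compare the target's and the initiator's view, clean up.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
"$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# Initiator view: every discovery-log entry that is not the current
# discovery subsystem is a referral.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    "$rpc" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done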
00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:49.188 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.446 09:54:26 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.446 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:49.704 
09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.704 09:54:26 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:49.704 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:49.704 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:49.704 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.704 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.704 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.704 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:49.704 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:49.704 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:49.704 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:49.704 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:49.961 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:49.961 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:49.961 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:49.961 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:49.961 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current 
discovery subsystem").traddr' 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.962 rmmod nvme_tcp 00:13:49.962 rmmod nvme_fabrics 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 65972 ']' 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 65972 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' -z 65972 ']' 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # kill -0 65972 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # uname 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 65972 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # echo 'killing process with pid 65972' 00:13:49.962 killing process with pid 65972 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # kill 65972 00:13:49.962 [2024-05-15 09:54:27.261904] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:49.962 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@971 -- # wait 65972 00:13:50.526 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:50.526 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:50.526 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:50.526 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.526 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:50.526 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.526 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.526 09:54:27 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.526 09:54:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:50.526 00:13:50.526 real 0m3.417s 00:13:50.526 user 0m10.555s 00:13:50.526 sys 0m1.036s 00:13:50.526 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:50.526 ************************************ 00:13:50.526 END TEST nvmf_referrals 00:13:50.526 ************************************ 00:13:50.526 09:54:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:50.526 09:54:27 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:50.526 09:54:27 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:50.526 09:54:27 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:50.526 09:54:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:50.526 ************************************ 00:13:50.526 START TEST nvmf_connect_disconnect 00:13:50.526 ************************************ 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:50.526 * Looking for test storage... 00:13:50.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.526 09:54:27 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.526 09:54:27 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:50.526 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- 
# NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:50.527 Cannot find device "nvmf_tgt_br" 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.527 Cannot find device "nvmf_tgt_br2" 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:50.527 Cannot find device "nvmf_tgt_br" 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:50.527 Cannot find device "nvmf_tgt_br2" 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:13:50.527 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:50.784 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:50.784 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:50.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.784 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:13:50.784 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.784 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.784 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:13:50.784 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:50.784 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:50.784 09:54:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 
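For reference, the namespace and veth plumbing traced up to this point (the nvmf_veth_init helper in nvmf/common.sh) reduces to roughly the following sequence; this is a sketch assembled from the commands shown above, run as root, and the earlier "Cannot find device" / "Cannot open network namespace" messages are only the best-effort teardown of a previous run:

    ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target-side pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target-side pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up

The bridge, firewall, and link-up commands that follow in the trace complete the path between 10.0.0.1 and the two target addresses.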
00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:50.784 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:51.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:13:51.041 00:13:51.041 --- 10.0.0.2 ping statistics --- 00:13:51.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.041 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:51.041 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:51.041 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:13:51.041 00:13:51.041 --- 10.0.0.3 ping statistics --- 00:13:51.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.041 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:51.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:13:51.041 00:13:51.041 --- 10.0.0.1 ping statistics --- 00:13:51.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.041 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66274 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66274 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # '[' -z 66274 ']' 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:51.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:51.041 09:54:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:51.041 [2024-05-15 09:54:28.293789] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:13:51.041 [2024-05-15 09:54:28.293929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.298 [2024-05-15 09:54:28.438728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.298 [2024-05-15 09:54:28.608503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
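Stripped of the trace plumbing, starting the target for this test amounts to launching nvmf_tgt inside the namespace and waiting for its RPC socket. The loop below is a simplified stand-in for the waitforlisten helper seen above (which additionally checks that the pid is still alive and gives up after 100 retries):

    # shm id 0 (-i), all trace groups (-e 0xFFFF), cores 0-3 (-m 0xF), run inside the target namespace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait until the app is listening on the default RPC socket
    until [ -S /var/tmp/spdk.sock ]; do
        sleep 0.1
    done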
00:13:51.298 [2024-05-15 09:54:28.608614] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.298 [2024-05-15 09:54:28.608628] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.298 [2024-05-15 09:54:28.608639] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.298 [2024-05-15 09:54:28.608648] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.298 [2024-05-15 09:54:28.608790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.298 [2024-05-15 09:54:28.608887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.298 [2024-05-15 09:54:28.609447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.298 [2024-05-15 09:54:28.609456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.229 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # return 0 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:52.230 [2024-05-15 09:54:29.345019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 
00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:52.230 [2024-05-15 09:54:29.422686] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:52.230 [2024-05-15 09:54:29.423028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:52.230 09:54:29 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:54.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:03.893 rmmod nvme_tcp 00:14:03.893 rmmod nvme_fabrics 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66274 ']' 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66274 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' -z 66274 ']' 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # kill -0 66274 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # uname 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:03.893 
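The configuration driven through rpc_cmd above, plus the five connect/disconnect iterations, can be reproduced by hand roughly as follows. This is a sketch rather than the literal connect_disconnect.sh body: rpc_cmd in the trace wraps scripts/rpc.py against /var/tmp/spdk.sock, and the "NQN:... disconnected 1 controller(s)" lines are nvme-cli output from the disconnect step.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                            # 64 MB malloc bdev, 512-byte blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in $(seq 1 5); do                                   # num_iterations=5 in the trace
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1         # prints "NQN:... disconnected 1 controller(s)"
    done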
09:54:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 66274 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:03.893 killing process with pid 66274 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 66274' 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # kill 66274 00:14:03.893 [2024-05-15 09:54:40.929314] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:03.893 09:54:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # wait 66274 00:14:03.893 09:54:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.893 09:54:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.893 09:54:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:03.893 09:54:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.893 09:54:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.893 09:54:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.893 09:54:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.893 09:54:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.893 09:54:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:03.893 00:14:03.893 real 0m13.554s 00:14:03.893 user 0m48.712s 00:14:03.893 sys 0m2.486s 00:14:03.893 09:54:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:03.893 09:54:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:03.893 ************************************ 00:14:03.893 END TEST nvmf_connect_disconnect 00:14:03.893 ************************************ 00:14:04.151 09:54:41 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:04.151 09:54:41 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:04.151 09:54:41 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:04.151 09:54:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:04.151 ************************************ 00:14:04.151 START TEST nvmf_multitarget 00:14:04.151 ************************************ 00:14:04.151 09:54:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:04.151 * Looking for test storage... 
00:14:04.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:04.151 09:54:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:04.151 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:04.151 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.151 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.151 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.151 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.151 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.152 09:54:41 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:04.152 Cannot find device "nvmf_tgt_br" 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:04.152 Cannot find device "nvmf_tgt_br2" 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:04.152 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:04.411 Cannot find device "nvmf_tgt_br" 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:04.411 Cannot find device "nvmf_tgt_br2" 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:14:04.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:04.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:04.411 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:04.668 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:04.668 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:04.668 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:04.668 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:04.668 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:04.668 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:04.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:04.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:14:04.669 00:14:04.669 --- 10.0.0.2 ping statistics --- 00:14:04.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.669 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:04.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:04.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:04.669 00:14:04.669 --- 10.0.0.3 ping statistics --- 00:14:04.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.669 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:04.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:04.669 00:14:04.669 --- 10.0.0.1 ping statistics --- 00:14:04.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.669 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=66679 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 66679 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # '[' -z 66679 ']' 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
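Each nvmftestinit in this log repeats the same bridge, firewall, and reachability steps once the veth pairs exist; condensed from the commands in the trace, the verification half is:

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (port 4420) on the initiator interface
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let traffic hairpin across the bridge
    ping -c 1 10.0.0.2                                    # initiator -> first target address
    ping -c 1 10.0.0.3                                    # initiator -> second target address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator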
00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:04.669 09:54:41 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:04.669 [2024-05-15 09:54:41.950811] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:14:04.669 [2024-05-15 09:54:41.950914] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.926 [2024-05-15 09:54:42.110933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:04.926 [2024-05-15 09:54:42.285831] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.926 [2024-05-15 09:54:42.285914] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.926 [2024-05-15 09:54:42.285930] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.926 [2024-05-15 09:54:42.285944] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.926 [2024-05-15 09:54:42.285968] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.926 [2024-05-15 09:54:42.286216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.926 [2024-05-15 09:54:42.286279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.926 [2024-05-15 09:54:42.286898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:04.926 [2024-05-15 09:54:42.286911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.883 09:54:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:05.884 09:54:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@861 -- # return 0 00:14:05.884 09:54:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.884 09:54:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:05.884 09:54:43 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:05.884 09:54:43 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.884 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:05.884 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:05.884 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:05.884 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:05.884 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:06.144 "nvmf_tgt_1" 00:14:06.144 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:06.144 "nvmf_tgt_2" 00:14:06.144 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:06.144 09:54:43 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:06.402 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:06.402 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:06.402 true 00:14:06.659 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:06.659 true 00:14:06.659 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:06.659 09:54:43 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:06.917 rmmod nvme_tcp 00:14:06.917 rmmod nvme_fabrics 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 66679 ']' 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 66679 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' -z 66679 ']' 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # kill -0 66679 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # uname 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 66679 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # echo 'killing process with pid 66679' 00:14:06.917 killing process with pid 66679 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # kill 66679 00:14:06.917 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@971 -- # wait 66679 00:14:07.484 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.484 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.484 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.485 
09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.485 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.485 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.485 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.485 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.485 09:54:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:07.485 00:14:07.485 real 0m3.307s 00:14:07.485 user 0m10.200s 00:14:07.485 sys 0m0.936s 00:14:07.485 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:07.485 09:54:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:07.485 ************************************ 00:14:07.485 END TEST nvmf_multitarget 00:14:07.485 ************************************ 00:14:07.485 09:54:44 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:07.485 09:54:44 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:07.485 09:54:44 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:07.485 09:54:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.485 ************************************ 00:14:07.485 START TEST nvmf_rpc 00:14:07.485 ************************************ 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:07.485 * Looking for test storage... 
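The nvmf_multitarget run that just finished drives multitarget_rpc.py directly; its create/delete round trip, with the jq length checks seen in the trace, is essentially:

    rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

    $rpc_py nvmf_get_targets | jq length             # 1 -- only the default target exists
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc_py nvmf_get_targets | jq length             # 3 after the two creates
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    $rpc_py nvmf_get_targets | jq length             # back to 1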
00:14:07.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:07.485 Cannot find device "nvmf_tgt_br" 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:14:07.485 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:07.753 Cannot find device "nvmf_tgt_br2" 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:07.753 Cannot find device "nvmf_tgt_br" 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:07.753 Cannot find device "nvmf_tgt_br2" 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:07.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:07.753 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:07.753 09:54:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:07.753 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:08.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:14:08.011 00:14:08.011 --- 10.0.0.2 ping statistics --- 00:14:08.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.011 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:08.011 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:08.011 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:14:08.011 00:14:08.011 --- 10.0.0.3 ping statistics --- 00:14:08.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.011 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:08.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:08.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:08.011 00:14:08.011 --- 10.0.0.1 ping statistics --- 00:14:08.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.011 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=66918 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 66918 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # '[' -z 66918 ']' 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:08.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:08.011 09:54:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.011 [2024-05-15 09:54:45.294550] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:14:08.011 [2024-05-15 09:54:45.294646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.269 [2024-05-15 09:54:45.456150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:08.269 [2024-05-15 09:54:45.630415] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.269 [2024-05-15 09:54:45.630517] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
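For reference, the veth/namespace topology that the nvmf/common.sh trace above builds can be condensed into the sketch below. Interface names, addresses, iptables rules, and the nvmf_tgt invocation are taken directly from the trace; the cleanup of stale links from a previous run (the "Cannot find device" messages) is omitted, so this is a readable summary rather than the literal script. The three pings are the same reachability checks whose output appears above.

# Condensed sketch of the topology set up by nvmf/common.sh as traced above.
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
ip netns add "$NVMF_TARGET_NAMESPACE"

# One veth pair for the initiator side, two for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and address everything in 10.0.0.0/24.
ip link set nvmf_tgt_if  netns "$NVMF_TARGET_NAMESPACE"
ip link set nvmf_tgt_if2 netns "$NVMF_TARGET_NAMESPACE"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set nvmf_tgt_if up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set nvmf_tgt_if2 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in, permit bridge-local forwarding, verify reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1

# The target then runs inside the namespace with the same binary and flags as the trace
# (backgrounded; the harness waits for its RPC socket before issuing commands).
ip netns exec "$NVMF_TARGET_NAMESPACE" \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &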
00:14:08.269 [2024-05-15 09:54:45.630539] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.269 [2024-05-15 09:54:45.630555] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.269 [2024-05-15 09:54:45.630570] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.269 [2024-05-15 09:54:45.630778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:08.269 [2024-05-15 09:54:45.630906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:08.269 [2024-05-15 09:54:45.631396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:08.269 [2024-05-15 09:54:45.631407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@861 -- # return 0 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:09.203 "poll_groups": [ 00:14:09.203 { 00:14:09.203 "admin_qpairs": 0, 00:14:09.203 "completed_nvme_io": 0, 00:14:09.203 "current_admin_qpairs": 0, 00:14:09.203 "current_io_qpairs": 0, 00:14:09.203 "io_qpairs": 0, 00:14:09.203 "name": "nvmf_tgt_poll_group_000", 00:14:09.203 "pending_bdev_io": 0, 00:14:09.203 "transports": [] 00:14:09.203 }, 00:14:09.203 { 00:14:09.203 "admin_qpairs": 0, 00:14:09.203 "completed_nvme_io": 0, 00:14:09.203 "current_admin_qpairs": 0, 00:14:09.203 "current_io_qpairs": 0, 00:14:09.203 "io_qpairs": 0, 00:14:09.203 "name": "nvmf_tgt_poll_group_001", 00:14:09.203 "pending_bdev_io": 0, 00:14:09.203 "transports": [] 00:14:09.203 }, 00:14:09.203 { 00:14:09.203 "admin_qpairs": 0, 00:14:09.203 "completed_nvme_io": 0, 00:14:09.203 "current_admin_qpairs": 0, 00:14:09.203 "current_io_qpairs": 0, 00:14:09.203 "io_qpairs": 0, 00:14:09.203 "name": "nvmf_tgt_poll_group_002", 00:14:09.203 "pending_bdev_io": 0, 00:14:09.203 "transports": [] 00:14:09.203 }, 00:14:09.203 { 00:14:09.203 "admin_qpairs": 0, 00:14:09.203 "completed_nvme_io": 0, 00:14:09.203 "current_admin_qpairs": 0, 00:14:09.203 "current_io_qpairs": 0, 00:14:09.203 "io_qpairs": 0, 00:14:09.203 "name": "nvmf_tgt_poll_group_003", 00:14:09.203 "pending_bdev_io": 0, 00:14:09.203 "transports": [] 00:14:09.203 } 00:14:09.203 ], 00:14:09.203 "tick_rate": 2100000000 00:14:09.203 }' 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.203 [2024-05-15 09:54:46.555879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.203 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.462 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.462 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:09.462 "poll_groups": [ 00:14:09.462 { 00:14:09.462 "admin_qpairs": 0, 00:14:09.462 "completed_nvme_io": 0, 00:14:09.462 "current_admin_qpairs": 0, 00:14:09.462 "current_io_qpairs": 0, 00:14:09.462 "io_qpairs": 0, 00:14:09.462 "name": "nvmf_tgt_poll_group_000", 00:14:09.462 "pending_bdev_io": 0, 00:14:09.462 "transports": [ 00:14:09.462 { 00:14:09.462 "trtype": "TCP" 00:14:09.462 } 00:14:09.462 ] 00:14:09.462 }, 00:14:09.462 { 00:14:09.462 "admin_qpairs": 0, 00:14:09.462 "completed_nvme_io": 0, 00:14:09.462 "current_admin_qpairs": 0, 00:14:09.462 "current_io_qpairs": 0, 00:14:09.462 "io_qpairs": 0, 00:14:09.462 "name": "nvmf_tgt_poll_group_001", 00:14:09.462 "pending_bdev_io": 0, 00:14:09.462 "transports": [ 00:14:09.462 { 00:14:09.462 "trtype": "TCP" 00:14:09.462 } 00:14:09.462 ] 00:14:09.462 }, 00:14:09.462 { 00:14:09.462 "admin_qpairs": 0, 00:14:09.462 "completed_nvme_io": 0, 00:14:09.462 "current_admin_qpairs": 0, 00:14:09.462 "current_io_qpairs": 0, 00:14:09.462 "io_qpairs": 0, 00:14:09.462 "name": "nvmf_tgt_poll_group_002", 00:14:09.462 "pending_bdev_io": 0, 00:14:09.462 "transports": [ 00:14:09.462 { 00:14:09.462 "trtype": "TCP" 00:14:09.463 } 00:14:09.463 ] 00:14:09.463 }, 00:14:09.463 { 00:14:09.463 "admin_qpairs": 0, 00:14:09.463 "completed_nvme_io": 0, 00:14:09.463 "current_admin_qpairs": 0, 00:14:09.463 "current_io_qpairs": 0, 00:14:09.463 "io_qpairs": 0, 00:14:09.463 "name": "nvmf_tgt_poll_group_003", 00:14:09.463 "pending_bdev_io": 0, 00:14:09.463 "transports": [ 00:14:09.463 { 00:14:09.463 "trtype": "TCP" 00:14:09.463 } 00:14:09.463 ] 00:14:09.463 } 00:14:09.463 ], 00:14:09.463 "tick_rate": 2100000000 00:14:09.463 }' 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
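The jcount/jsum checks traced here reduce the captured nvmf_get_stats JSON with jq. A minimal reconstruction of the two helpers, assuming they read the $stats variable that rpc.sh captures (the literal definitions live in test/nvmf/target/rpc.sh and may differ in detail):

# Reconstruction (not the literal rpc.sh source) of the helpers seen in the trace.
# Assumes $stats holds the JSON returned by `rpc_cmd nvmf_get_stats`.
jcount() {                       # count how many values a jq filter yields
    local filter=$1
    jq "$filter" <<< "$stats" | wc -l
}

jsum() {                         # sum the numeric values a jq filter yields
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# Usage mirroring the assertions in the log: four poll groups, all idle before the
# transport is created.
(( $(jcount '.poll_groups[].name') == 4 ))
(( $(jsum '.poll_groups[].admin_qpairs') == 0 ))
(( $(jsum '.poll_groups[].io_qpairs') == 0 ))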
00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.463 Malloc1 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.463 [2024-05-15 09:54:46.767700] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:09.463 [2024-05-15 09:54:46.768066] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -a 10.0.0.2 -s 4420 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 
--hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -a 10.0.0.2 -s 4420 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -a 10.0.0.2 -s 4420 00:14:09.463 [2024-05-15 09:54:46.798417] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4' 00:14:09.463 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:09.463 could not add new controller: failed to write to nvme-fabrics device 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:09.463 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:09.721 09:54:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:09.721 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:09.721 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.721 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:09.721 09:54:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:11.619 09:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:11.886 09:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:11.886 09:54:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:11.886 [2024-05-15 09:54:49.139418] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 
'nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4' 00:14:11.886 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:11.886 could not add new controller: failed to write to nvme-fabrics device 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:11.886 09:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.144 09:54:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.144 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:12.144 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.144 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:12.144 09:54:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:14.118 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:14.118 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.118 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:14.118 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:14.118 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.118 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:14.118 09:54:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.376 
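The block above is the host-authorization test: the subsystem is created and allow-any-host is disabled, so the first nvme connect is rejected with "does not allow host" and nvme-cli reports an I/O error; the connect only succeeds after the host NQN is whitelisted, and again after allow-any-host is re-enabled. Stripped of the NOT/rpc_cmd wrappers, the same flow looks roughly like this, using SPDK's scripts/rpc.py (which rpc_cmd wraps) and nvme-cli; HOSTNQN stands in for the UUID-based host NQN in the log:

SUBSYS=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4

# Connect attempt while no host is allowed: the target logs "does not allow host ..."
# and the connect fails with an I/O error on /dev/nvme-fabrics.
nvme connect -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420 \
     --hostnqn="$HOSTNQN" --hostid=8b97099d-9860-4879-a034-2bfa904443b4 || true

# Whitelist the host NQN; the same connect now succeeds.
scripts/rpc.py nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN"
nvme connect -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420 \
     --hostnqn="$HOSTNQN" --hostid=8b97099d-9860-4879-a034-2bfa904443b4
nvme disconnect -n "$SUBSYS"

# Remove the host again (connect fails once more), then open the subsystem
# to any host with -e, which is the second half of the trace.
scripts/rpc.py nvmf_subsystem_remove_host "$SUBSYS" "$HOSTNQN"
scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBSYS"
nvme connect -t tcp -n "$SUBSYS" -a 10.0.0.2 -s 4420 \
     --hostnqn="$HOSTNQN" --hostid=8b97099d-9860-4879-a034-2bfa904443b4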
09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.376 [2024-05-15 09:54:51.583675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:14.376 09:54:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:14.634 09:54:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:14.634 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:14.634 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.634 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:14.634 09:54:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:16.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.532 [2024-05-15 09:54:53.907963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.532 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.791 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.791 09:54:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:16.791 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:16.791 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.791 09:54:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:16.791 09:54:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 
--hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.791 09:54:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.791 09:54:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:16.791 09:54:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.791 09:54:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:16.791 09:54:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.322 
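waitforserial and waitforserial_disconnect, whose polling shows up around every connect and disconnect in this trace, simply wait for a block device with the subsystem serial (SPDKISFASTANDAWESOME) to appear in or vanish from lsblk output. A simplified reconstruction of the logic visible in the autotest_common.sh trace; the real helpers carry extra retry and error handling not shown here:

# Simplified reconstruction of the polling traced at autotest_common.sh@1195-1228.
waitforserial() {
    local serial=$1 expected=${2:-1} i=0 nvme_devices=0
    while (( i++ <= 15 )); do
        sleep 2
        # Count block devices whose SERIAL column matches the subsystem serial.
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == expected )) && return 0
    done
    return 1
}

waitforserial_disconnect() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 1
    done
    return 1
}

# Usage as in the trace:
#   waitforserial SPDKISFASTANDAWESOME
#   nvme disconnect -n nqn.2016-06.io.spdk:cnode1
#   waitforserial_disconnect SPDKISFASTANDAWESOME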
09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.322 [2024-05-15 09:54:56.239898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:19.322 09:54:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:21.222 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.223 [2024-05-15 09:54:58.548717] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:21.223 09:54:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:21.480 09:54:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:21.480 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:21.480 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:21.480 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:21.480 09:54:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:23.381 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:23.381 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:23.381 09:55:00 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.381 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:23.381 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.381 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:23.381 09:55:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.639 [2024-05-15 09:55:00.880282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.639 09:55:00 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:23.639 09:55:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.897 09:55:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:23.897 09:55:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:14:23.897 09:55:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.897 09:55:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:14:23.897 09:55:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:14:25.796 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # 
for i in $(seq 1 $loops) 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 [2024-05-15 09:55:03.238270] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 [2024-05-15 09:55:03.310422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 [2024-05-15 09:55:03.366518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
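The surrounding trace runs two five-iteration loops over the subsystem lifecycle: the first (rpc.sh@81-94) creates the subsystem, attaches Malloc1 as namespace 5, connects and disconnects a host, then tears everything down; the second (rpc.sh@99-107) performs the same create/add-namespace/remove/delete churn purely over RPC with no host connection. The RPC-only loop, expressed with scripts/rpc.py calls that correspond one-to-one to the rpc_cmd lines in the trace, would look roughly like this:

SUBSYS=nqn.2016-06.io.spdk:cnode1

for i in $(seq 1 5); do
    # Create the subsystem with the serial number the host-side checks grep for.
    scripts/rpc.py nvmf_create_subsystem "$SUBSYS" -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener "$SUBSYS" -t tcp -a 10.0.0.2 -s 4420
    # Attach the Malloc1 bdev as a namespace, then open the subsystem to any host.
    scripts/rpc.py nvmf_subsystem_add_ns "$SUBSYS" Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBSYS"
    # Detach the namespace again (nsid 1 here; the connect loop uses -n 5) and delete.
    scripts/rpc.py nvmf_subsystem_remove_ns "$SUBSYS" 1
    scripts/rpc.py nvmf_delete_subsystem "$SUBSYS"
done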
00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.054 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.055 [2024-05-15 09:55:03.426654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.055 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:26.313 09:55:03 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.313 [2024-05-15 09:55:03.478788] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:26.313 "poll_groups": [ 00:14:26.313 { 00:14:26.313 "admin_qpairs": 2, 00:14:26.313 "completed_nvme_io": 166, 00:14:26.313 "current_admin_qpairs": 0, 00:14:26.313 "current_io_qpairs": 0, 00:14:26.313 "io_qpairs": 16, 00:14:26.313 "name": "nvmf_tgt_poll_group_000", 00:14:26.313 "pending_bdev_io": 0, 00:14:26.313 "transports": [ 00:14:26.313 { 00:14:26.313 "trtype": "TCP" 00:14:26.313 } 00:14:26.313 ] 00:14:26.313 }, 00:14:26.313 { 00:14:26.313 "admin_qpairs": 3, 00:14:26.313 "completed_nvme_io": 67, 00:14:26.313 "current_admin_qpairs": 0, 00:14:26.313 "current_io_qpairs": 
0, 00:14:26.313 "io_qpairs": 17, 00:14:26.313 "name": "nvmf_tgt_poll_group_001", 00:14:26.313 "pending_bdev_io": 0, 00:14:26.313 "transports": [ 00:14:26.313 { 00:14:26.313 "trtype": "TCP" 00:14:26.313 } 00:14:26.313 ] 00:14:26.313 }, 00:14:26.313 { 00:14:26.313 "admin_qpairs": 1, 00:14:26.313 "completed_nvme_io": 21, 00:14:26.313 "current_admin_qpairs": 0, 00:14:26.313 "current_io_qpairs": 0, 00:14:26.313 "io_qpairs": 19, 00:14:26.313 "name": "nvmf_tgt_poll_group_002", 00:14:26.313 "pending_bdev_io": 0, 00:14:26.313 "transports": [ 00:14:26.313 { 00:14:26.313 "trtype": "TCP" 00:14:26.313 } 00:14:26.313 ] 00:14:26.313 }, 00:14:26.313 { 00:14:26.313 "admin_qpairs": 1, 00:14:26.313 "completed_nvme_io": 166, 00:14:26.313 "current_admin_qpairs": 0, 00:14:26.313 "current_io_qpairs": 0, 00:14:26.313 "io_qpairs": 18, 00:14:26.313 "name": "nvmf_tgt_poll_group_003", 00:14:26.313 "pending_bdev_io": 0, 00:14:26.313 "transports": [ 00:14:26.313 { 00:14:26.313 "trtype": "TCP" 00:14:26.313 } 00:14:26.313 ] 00:14:26.313 } 00:14:26.313 ], 00:14:26.313 "tick_rate": 2100000000 00:14:26.313 }' 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.313 rmmod nvme_tcp 00:14:26.313 rmmod nvme_fabrics 00:14:26.313 09:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 66918 ']' 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 66918 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' -z 66918 ']' 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # kill -0 66918 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # uname 00:14:26.572 09:55:03 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 66918 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 66918' 00:14:26.572 killing process with pid 66918 00:14:26.572 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # kill 66918 00:14:26.572 [2024-05-15 09:55:03.730538] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 09:55:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@971 -- # wait 66918 00:14:26.572 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:26.831 09:55:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.831 09:55:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.831 09:55:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.831 09:55:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.831 09:55:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.831 09:55:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.831 09:55:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.831 09:55:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.831 09:55:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:26.831 00:14:26.831 real 0m19.509s 00:14:26.831 user 1m11.383s 00:14:26.831 sys 0m3.866s 00:14:26.831 09:55:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:26.831 09:55:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.831 ************************************ 00:14:26.831 END TEST nvmf_rpc 00:14:26.831 ************************************ 00:14:27.090 09:55:04 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:27.090 09:55:04 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:27.090 09:55:04 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:27.090 09:55:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:27.090 ************************************ 00:14:27.090 START TEST nvmf_invalid 00:14:27.090 ************************************ 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:27.090 * Looking for test storage... 
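The nvmf_invalid run is starting above; before following it, note what the jsum assertions that closed out nvmf_rpc actually computed: a jq/awk sum over the captured nvmf_get_stats JSON. With the four poll groups shown, admin_qpairs sums to 2+3+1+1 = 7 and io_qpairs to 16+17+19+18 = 70, which is exactly what the (( 7 > 0 )) and (( 70 > 0 )) checks assert. A minimal stand-alone sketch of the helper, assuming the captured $stats JSON is what gets filtered (the real definition lives in target/rpc.sh):

    jsum() {
        local filter=$1
        # jq emits one value per poll group; awk accumulates the total
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+3+1+1 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 16+17+19+18 = 70 in this run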
00:14:27.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.090 09:55:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.091 
09:55:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.091 09:55:04 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:27.091 Cannot find device "nvmf_tgt_br" 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:14:27.091 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:27.349 Cannot find device "nvmf_tgt_br2" 00:14:27.349 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:14:27.349 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:27.349 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:27.349 Cannot find device "nvmf_tgt_br" 00:14:27.349 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:14:27.349 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:27.349 Cannot find device "nvmf_tgt_br2" 00:14:27.349 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:14:27.349 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:27.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:27.350 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:27.350 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:27.608 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:27.608 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:27.608 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:27.608 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:27.608 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:27.608 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:27.608 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:27.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:14:27.609 00:14:27.609 --- 10.0.0.2 ping statistics --- 00:14:27.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.609 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:27.609 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:27.609 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:14:27.609 00:14:27.609 --- 10.0.0.3 ping statistics --- 00:14:27.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.609 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:27.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:14:27.609 00:14:27.609 --- 10.0.0.1 ping statistics --- 00:14:27.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.609 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67435 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67435 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # '[' -z 67435 ']' 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:27.609 09:55:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:27.609 [2024-05-15 09:55:04.965210] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
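The nvmf_veth_init sequence above builds the virtual topology that the rest of invalid.sh talks to: veth pairs bridged together in the root namespace, with the target ends moved into the nvmf_tgt_ns_spdk namespace, and nvmf_tgt then launched inside that namespace. A condensed sketch of the same steps, keeping the names and addresses from the trace (only the first target interface is shown; the 10.0.0.3 interface and the nvmfappstart/waitforlisten wrapping are omitted):

    ip netns add nvmf_tgt_ns_spdk

    # initiator side stays in the root namespace; the target end moves into the test namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # everything meets on one bridge, and TCP/4420 is allowed in from the initiator interface
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2    # initiator -> target, as checked in the trace

    # the target itself then runs inside the namespace (nvmf/common.sh@480 above)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &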
00:14:27.609 [2024-05-15 09:55:04.965524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.866 [2024-05-15 09:55:05.125551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.866 [2024-05-15 09:55:05.246416] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.866 [2024-05-15 09:55:05.246705] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.866 [2024-05-15 09:55:05.246914] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.866 [2024-05-15 09:55:05.247034] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.866 [2024-05-15 09:55:05.247078] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.866 [2024-05-15 09:55:05.247305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.866 [2024-05-15 09:55:05.247413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.124 [2024-05-15 09:55:05.248168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.124 [2024-05-15 09:55:05.248179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.124 09:55:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:28.124 09:55:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@861 -- # return 0 00:14:28.124 09:55:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.124 09:55:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:28.124 09:55:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:28.124 09:55:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.124 09:55:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:28.124 09:55:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29747 00:14:28.382 [2024-05-15 09:55:05.679440] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:28.382 09:55:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/05/15 09:55:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29747 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:14:28.382 request: 00:14:28.382 { 00:14:28.382 "method": "nvmf_create_subsystem", 00:14:28.382 "params": { 00:14:28.382 "nqn": "nqn.2016-06.io.spdk:cnode29747", 00:14:28.382 "tgt_name": "foobar" 00:14:28.382 } 00:14:28.382 } 00:14:28.382 Got JSON-RPC error response 00:14:28.382 GoRPCClient: error on JSON-RPC call' 00:14:28.382 09:55:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/05/15 09:55:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29747 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:14:28.382 request: 00:14:28.382 { 
00:14:28.382 "method": "nvmf_create_subsystem", 00:14:28.382 "params": { 00:14:28.382 "nqn": "nqn.2016-06.io.spdk:cnode29747", 00:14:28.382 "tgt_name": "foobar" 00:14:28.382 } 00:14:28.382 } 00:14:28.382 Got JSON-RPC error response 00:14:28.382 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:28.382 09:55:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:28.382 09:55:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode26620 00:14:28.640 [2024-05-15 09:55:06.020058] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26620: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:28.898 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/05/15 09:55:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26620 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:14:28.898 request: 00:14:28.898 { 00:14:28.898 "method": "nvmf_create_subsystem", 00:14:28.898 "params": { 00:14:28.898 "nqn": "nqn.2016-06.io.spdk:cnode26620", 00:14:28.898 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:14:28.898 } 00:14:28.898 } 00:14:28.898 Got JSON-RPC error response 00:14:28.898 GoRPCClient: error on JSON-RPC call' 00:14:28.898 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/05/15 09:55:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode26620 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:14:28.898 request: 00:14:28.898 { 00:14:28.898 "method": "nvmf_create_subsystem", 00:14:28.898 "params": { 00:14:28.898 "nqn": "nqn.2016-06.io.spdk:cnode26620", 00:14:28.898 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:14:28.898 } 00:14:28.898 } 00:14:28.898 Got JSON-RPC error response 00:14:28.898 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:28.898 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:28.898 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18093 00:14:29.156 [2024-05-15 09:55:06.368578] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18093: invalid model number 'SPDK_Controller' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/05/15 09:55:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode18093], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:14:29.156 request: 00:14:29.156 { 00:14:29.156 "method": "nvmf_create_subsystem", 00:14:29.156 "params": { 00:14:29.156 "nqn": "nqn.2016-06.io.spdk:cnode18093", 00:14:29.156 "model_number": "SPDK_Controller\u001f" 00:14:29.156 } 00:14:29.156 } 00:14:29.156 Got JSON-RPC error response 00:14:29.156 GoRPCClient: error on JSON-RPC call' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/05/15 09:55:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode18093], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:14:29.156 request: 00:14:29.156 { 00:14:29.156 "method": "nvmf_create_subsystem", 00:14:29.156 "params": { 00:14:29.156 "nqn": "nqn.2016-06.io.spdk:cnode18093", 00:14:29.156 "model_number": "SPDK_Controller\u001f" 00:14:29.156 } 00:14:29.156 } 00:14:29.156 Got JSON-RPC error response 00:14:29.156 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:29.156 09:55:06 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.156 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:29.157 09:55:06 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.157 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.415 09:55:06 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]] 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'c5N'\''h9u^3*DHd|E.n9=dW' 00:14:29.415 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'c5N'\''h9u^3*DHd|E.n9=dW' nqn.2016-06.io.spdk:cnode339 00:14:29.673 [2024-05-15 09:55:06.829120] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode339: invalid serial number 'c5N'h9u^3*DHd|E.n9=dW' 00:14:29.673 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/05/15 09:55:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode339 serial_number:c5N'\''h9u^3*DHd|E.n9=dW], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN c5N'\''h9u^3*DHd|E.n9=dW 00:14:29.673 request: 00:14:29.673 { 00:14:29.673 "method": "nvmf_create_subsystem", 00:14:29.673 "params": { 00:14:29.673 "nqn": "nqn.2016-06.io.spdk:cnode339", 00:14:29.673 "serial_number": "c5N'\''h9u^3*DHd|E.n9=dW" 00:14:29.673 } 00:14:29.673 } 00:14:29.673 Got JSON-RPC error response 00:14:29.673 GoRPCClient: error on JSON-RPC call' 00:14:29.673 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/05/15 09:55:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode339 serial_number:c5N'h9u^3*DHd|E.n9=dW], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN c5N'h9u^3*DHd|E.n9=dW 00:14:29.673 request: 00:14:29.673 { 00:14:29.673 "method": "nvmf_create_subsystem", 00:14:29.673 "params": { 00:14:29.673 "nqn": "nqn.2016-06.io.spdk:cnode339", 00:14:29.673 "serial_number": "c5N'h9u^3*DHd|E.n9=dW" 00:14:29.673 } 00:14:29.673 } 00:14:29.673 Got JSON-RPC error response 00:14:29.673 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:29.673 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:29.673 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 106 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x4f' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.674 09:55:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:14:29.674 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:29.674 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:29.674 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=A 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.675 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '"c/>Xc\Xaj+V)^l3]Op/Nwr`JALUsk1DF7 BB5c~p' 00:14:29.933 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '"c/>Xc\Xaj+V)^l3]Op/Nwr`JALUsk1DF7 BB5c~p' 
nqn.2016-06.io.spdk:cnode7148 00:14:30.192 [2024-05-15 09:55:07.381740] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7148: invalid model number '"c/>Xc\Xaj+V)^l3]Op/Nwr`JALUsk1DF7 BB5c~p' 00:14:30.192 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/05/15 09:55:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:"c/>Xc\Xaj+V)^l3]Op/Nwr`JALUsk1DF7 BB5c~p nqn:nqn.2016-06.io.spdk:cnode7148], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN "c/>Xc\Xaj+V)^l3]Op/Nwr`JALUsk1DF7 BB5c~p 00:14:30.192 request: 00:14:30.192 { 00:14:30.192 "method": "nvmf_create_subsystem", 00:14:30.192 "params": { 00:14:30.192 "nqn": "nqn.2016-06.io.spdk:cnode7148", 00:14:30.192 "model_number": "\"c/>Xc\\Xaj+V)^l3]Op/Nwr`JALUsk1DF7 BB5c~p" 00:14:30.192 } 00:14:30.192 } 00:14:30.192 Got JSON-RPC error response 00:14:30.192 GoRPCClient: error on JSON-RPC call' 00:14:30.192 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/05/15 09:55:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:"c/>Xc\Xaj+V)^l3]Op/Nwr`JALUsk1DF7 BB5c~p nqn:nqn.2016-06.io.spdk:cnode7148], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN "c/>Xc\Xaj+V)^l3]Op/Nwr`JALUsk1DF7 BB5c~p 00:14:30.192 request: 00:14:30.192 { 00:14:30.192 "method": "nvmf_create_subsystem", 00:14:30.192 "params": { 00:14:30.192 "nqn": "nqn.2016-06.io.spdk:cnode7148", 00:14:30.192 "model_number": "\"c/>Xc\\Xaj+V)^l3]Op/Nwr`JALUsk1DF7 BB5c~p" 00:14:30.192 } 00:14:30.192 } 00:14:30.192 Got JSON-RPC error response 00:14:30.192 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:30.192 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:30.451 [2024-05-15 09:55:07.642247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.451 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:30.709 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:30.709 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:30.709 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:30.709 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:30.709 09:55:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:30.967 [2024-05-15 09:55:08.253295] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:30.967 [2024-05-15 09:55:08.253450] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:30.967 09:55:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/05/15 09:55:08 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:14:30.967 request: 00:14:30.967 { 00:14:30.967 "method": "nvmf_subsystem_remove_listener", 00:14:30.967 "params": { 00:14:30.967 
"nqn": "nqn.2016-06.io.spdk:cnode", 00:14:30.967 "listen_address": { 00:14:30.967 "trtype": "tcp", 00:14:30.967 "traddr": "", 00:14:30.967 "trsvcid": "4421" 00:14:30.967 } 00:14:30.967 } 00:14:30.967 } 00:14:30.967 Got JSON-RPC error response 00:14:30.967 GoRPCClient: error on JSON-RPC call' 00:14:30.967 09:55:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/05/15 09:55:08 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:14:30.967 request: 00:14:30.967 { 00:14:30.967 "method": "nvmf_subsystem_remove_listener", 00:14:30.967 "params": { 00:14:30.967 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:30.967 "listen_address": { 00:14:30.967 "trtype": "tcp", 00:14:30.967 "traddr": "", 00:14:30.967 "trsvcid": "4421" 00:14:30.967 } 00:14:30.967 } 00:14:30.967 } 00:14:30.967 Got JSON-RPC error response 00:14:30.967 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:30.967 09:55:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4079 -i 0 00:14:31.226 [2024-05-15 09:55:08.545705] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4079: invalid cntlid range [0-65519] 00:14:31.226 09:55:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/05/15 09:55:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode4079], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:14:31.226 request: 00:14:31.226 { 00:14:31.226 "method": "nvmf_create_subsystem", 00:14:31.226 "params": { 00:14:31.226 "nqn": "nqn.2016-06.io.spdk:cnode4079", 00:14:31.226 "min_cntlid": 0 00:14:31.226 } 00:14:31.226 } 00:14:31.226 Got JSON-RPC error response 00:14:31.226 GoRPCClient: error on JSON-RPC call' 00:14:31.226 09:55:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/05/15 09:55:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode4079], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:14:31.226 request: 00:14:31.226 { 00:14:31.226 "method": "nvmf_create_subsystem", 00:14:31.226 "params": { 00:14:31.226 "nqn": "nqn.2016-06.io.spdk:cnode4079", 00:14:31.226 "min_cntlid": 0 00:14:31.226 } 00:14:31.226 } 00:14:31.226 Got JSON-RPC error response 00:14:31.226 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:31.226 09:55:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23728 -i 65520 00:14:31.484 [2024-05-15 09:55:08.810113] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23728: invalid cntlid range [65520-65519] 00:14:31.484 09:55:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/05/15 09:55:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode23728], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:14:31.484 request: 00:14:31.484 { 00:14:31.484 "method": "nvmf_create_subsystem", 
00:14:31.484 "params": { 00:14:31.484 "nqn": "nqn.2016-06.io.spdk:cnode23728", 00:14:31.484 "min_cntlid": 65520 00:14:31.484 } 00:14:31.484 } 00:14:31.484 Got JSON-RPC error response 00:14:31.484 GoRPCClient: error on JSON-RPC call' 00:14:31.484 09:55:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/05/15 09:55:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode23728], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:14:31.484 request: 00:14:31.484 { 00:14:31.484 "method": "nvmf_create_subsystem", 00:14:31.484 "params": { 00:14:31.484 "nqn": "nqn.2016-06.io.spdk:cnode23728", 00:14:31.485 "min_cntlid": 65520 00:14:31.485 } 00:14:31.485 } 00:14:31.485 Got JSON-RPC error response 00:14:31.485 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:31.485 09:55:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2773 -I 0 00:14:31.743 [2024-05-15 09:55:09.038435] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2773: invalid cntlid range [1-0] 00:14:31.743 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/05/15 09:55:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2773], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:14:31.743 request: 00:14:31.743 { 00:14:31.743 "method": "nvmf_create_subsystem", 00:14:31.743 "params": { 00:14:31.743 "nqn": "nqn.2016-06.io.spdk:cnode2773", 00:14:31.743 "max_cntlid": 0 00:14:31.743 } 00:14:31.743 } 00:14:31.743 Got JSON-RPC error response 00:14:31.743 GoRPCClient: error on JSON-RPC call' 00:14:31.743 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/05/15 09:55:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode2773], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:14:31.743 request: 00:14:31.743 { 00:14:31.743 "method": "nvmf_create_subsystem", 00:14:31.743 "params": { 00:14:31.743 "nqn": "nqn.2016-06.io.spdk:cnode2773", 00:14:31.743 "max_cntlid": 0 00:14:31.743 } 00:14:31.743 } 00:14:31.743 Got JSON-RPC error response 00:14:31.743 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:31.743 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19186 -I 65520 00:14:32.001 [2024-05-15 09:55:09.339658] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19186: invalid cntlid range [1-65520] 00:14:32.001 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/05/15 09:55:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode19186], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:14:32.001 request: 00:14:32.001 { 00:14:32.001 "method": "nvmf_create_subsystem", 00:14:32.001 "params": { 00:14:32.001 "nqn": "nqn.2016-06.io.spdk:cnode19186", 00:14:32.001 "max_cntlid": 65520 00:14:32.001 } 00:14:32.001 } 00:14:32.001 Got JSON-RPC error response 00:14:32.001 GoRPCClient: error on JSON-RPC call' 
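The trace above is target/invalid.sh exercising nvmf_tgt's JSON-RPC input validation: a 41-character model number is assembled byte by byte from random printable characters and nvmf_create_subsystem rejects it with "Invalid MN", and the cntlid bounds are then probed one by one (min_cntlid 0 and 65520, max_cntlid 0 and 65520, plus the inverted range [6-5] just below), each call failing with "Invalid cntlid range". A minimal sketch of the same negative checks, assuming a running nvmf_tgt reachable through scripts/rpc.py on its default socket (the cnode numbers below are illustrative, not taken from the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Model number longer than the 40-byte NVMe MN field -> expect "Invalid MN"
  mn=$(printf 'A%.0s' {1..41})
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 -d "$mn" && echo "unexpected success"
  # Out-of-range controller IDs -> expect "Invalid cntlid range"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9998 -i 0       && echo "unexpected success"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9997 -i 65520   && echo "unexpected success"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9996 -I 0       && echo "unexpected success"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9995 -I 65520   && echo "unexpected success"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9994 -i 6 -I 5  && echo "unexpected success"

Each command is expected to exit non-zero with a Code=-32602 JSON-RPC error, exactly as the captured GoRPCClient output shows.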
00:14:32.001 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/05/15 09:55:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode19186], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:14:32.001 request: 00:14:32.001 { 00:14:32.001 "method": "nvmf_create_subsystem", 00:14:32.001 "params": { 00:14:32.001 "nqn": "nqn.2016-06.io.spdk:cnode19186", 00:14:32.001 "max_cntlid": 65520 00:14:32.001 } 00:14:32.001 } 00:14:32.001 Got JSON-RPC error response 00:14:32.001 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:32.001 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28425 -i 6 -I 5 00:14:32.259 [2024-05-15 09:55:09.584093] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28425: invalid cntlid range [6-5] 00:14:32.259 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/05/15 09:55:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode28425], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:14:32.259 request: 00:14:32.259 { 00:14:32.259 "method": "nvmf_create_subsystem", 00:14:32.259 "params": { 00:14:32.259 "nqn": "nqn.2016-06.io.spdk:cnode28425", 00:14:32.259 "min_cntlid": 6, 00:14:32.259 "max_cntlid": 5 00:14:32.259 } 00:14:32.259 } 00:14:32.259 Got JSON-RPC error response 00:14:32.259 GoRPCClient: error on JSON-RPC call' 00:14:32.259 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/05/15 09:55:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode28425], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:14:32.259 request: 00:14:32.259 { 00:14:32.259 "method": "nvmf_create_subsystem", 00:14:32.259 "params": { 00:14:32.259 "nqn": "nqn.2016-06.io.spdk:cnode28425", 00:14:32.259 "min_cntlid": 6, 00:14:32.259 "max_cntlid": 5 00:14:32.259 } 00:14:32.259 } 00:14:32.259 Got JSON-RPC error response 00:14:32.259 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:32.259 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:32.546 { 00:14:32.546 "name": "foobar", 00:14:32.546 "method": "nvmf_delete_target", 00:14:32.546 "req_id": 1 00:14:32.546 } 00:14:32.546 Got JSON-RPC error response 00:14:32.546 response: 00:14:32.546 { 00:14:32.546 "code": -32602, 00:14:32.546 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:32.546 }' 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:32.546 { 00:14:32.546 "name": "foobar", 00:14:32.546 "method": "nvmf_delete_target", 00:14:32.546 "req_id": 1 00:14:32.546 } 00:14:32.546 Got JSON-RPC error response 00:14:32.546 response: 00:14:32.546 { 00:14:32.546 "code": -32602, 00:14:32.546 "message": "The specified target doesn't exist, cannot delete it." 
00:14:32.546 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.546 rmmod nvme_tcp 00:14:32.546 rmmod nvme_fabrics 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 67435 ']' 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 67435 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@947 -- # '[' -z 67435 ']' 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # kill -0 67435 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # uname 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 67435 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:14:32.546 killing process with pid 67435 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # echo 'killing process with pid 67435' 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # kill 67435 00:14:32.546 [2024-05-15 09:55:09.844224] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:32.546 09:55:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@971 -- # wait 67435 00:14:32.814 09:55:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.814 09:55:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:32.814 09:55:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:32.814 09:55:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.814 09:55:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:32.814 09:55:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.814 09:55:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.814 09:55:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.814 09:55:10 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:32.814 00:14:32.814 real 0m5.860s 
00:14:32.814 user 0m22.556s 00:14:32.814 sys 0m1.551s 00:14:32.814 09:55:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:32.814 ************************************ 00:14:32.814 END TEST nvmf_invalid 00:14:32.814 ************************************ 00:14:32.814 09:55:10 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:32.814 09:55:10 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:32.814 09:55:10 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:32.814 09:55:10 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:32.814 09:55:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:32.814 ************************************ 00:14:32.814 START TEST nvmf_abort 00:14:32.814 ************************************ 00:14:32.814 09:55:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:33.074 * Looking for test storage... 00:14:33.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.074 09:55:10 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.075 09:55:10 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:33.075 Cannot find device "nvmf_tgt_br" 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:33.075 Cannot find device "nvmf_tgt_br2" 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:33.075 Cannot find device "nvmf_tgt_br" 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:33.075 Cannot find device "nvmf_tgt_br2" 00:14:33.075 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:14:33.075 09:55:10 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:33.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:33.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:33.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:33.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:14:33.334 00:14:33.334 --- 10.0.0.2 ping statistics --- 00:14:33.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.334 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:33.334 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:33.334 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:14:33.334 00:14:33.334 --- 10.0.0.3 ping statistics --- 00:14:33.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.334 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:14:33.334 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:33.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:33.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:14:33.593 00:14:33.593 --- 10.0.0.1 ping statistics --- 00:14:33.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.593 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=67931 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 67931 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # '[' -z 67931 ']' 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:33.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
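Because NET_TYPE=virt, nvmf_veth_init builds a disposable test network rather than touching real NICs: the target-side veth ends are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator end nvmf_init_if (10.0.0.1) stays in the root namespace, everything is joined through the nvmf_br bridge, port 4420 is opened in iptables, and the three pings above confirm reachability before the target starts. nvmf_tgt is then launched inside the namespace with core mask 0xE and its pid (67931) recorded so the script can wait for the RPC socket. A condensed sketch of the same topology, using the interface names from the log and only one of the two target interfaces (run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2     # initiator -> target, matching the statistics above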
00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:33.593 09:55:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:33.593 [2024-05-15 09:55:10.812022] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:14:33.593 [2024-05-15 09:55:10.812154] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.593 [2024-05-15 09:55:10.960564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:33.851 [2024-05-15 09:55:11.139396] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.851 [2024-05-15 09:55:11.139464] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.851 [2024-05-15 09:55:11.139480] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.851 [2024-05-15 09:55:11.139494] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.851 [2024-05-15 09:55:11.139505] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.851 [2024-05-15 09:55:11.140041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.851 [2024-05-15 09:55:11.140148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.851 [2024-05-15 09:55:11.140259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@861 -- # return 0 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:34.785 [2024-05-15 09:55:11.952472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.785 09:55:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:34.785 Malloc0 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:34.785 Delay0 
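With the target up (reactors on cores 1-3 for the 0xE mask), abort.sh provisions the storage stack it will attack over JSON-RPC: a TCP transport created with '-o -u 8192 -a 256', a 64 MiB Malloc bdev with 4 KiB blocks, and a Delay0 delay bdev layered on top whose large -r/-t/-w/-n latency settings keep I/O in flight long enough to be worth aborting; the subsystem, namespace and listener registrations that follow complete the setup. A consolidated sketch of that sequence, using the same arguments as the rpc_cmd calls in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256                 # TCP transport with abort.sh's options
  $rpc bdev_malloc_create 64 4096 -b Malloc0                          # 64 MiB RAM bdev, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000                    # add artificial latency on top of Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # allow any host, serial SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0        # expose Delay0 as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420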
00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:34.785 [2024-05-15 09:55:12.042607] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:34.785 [2024-05-15 09:55:12.043205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.785 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.786 09:55:12 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:34.786 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.786 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:34.786 09:55:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.786 09:55:12 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:35.044 [2024-05-15 09:55:12.243013] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:36.939 Initializing NVMe Controllers 00:14:36.939 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:36.939 controller IO queue size 128 less than required 00:14:36.939 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:36.939 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:36.939 Initialization complete. Launching workers. 
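The workload itself is SPDK's prebuilt abort example, aimed at the listener just created; the flags request a single core (-c 0x1), a short timed run (-t 1), warning-level logging (-l warning) and queue depth 128 (-q 128), and because the controller advertises a smaller I/O queue than requested, the tool warns that excess requests will simply queue in the driver. To repeat the run by hand against the same target:

  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -c 0x1 -t 1 -l warning -q 128

Broadly, in the summary that follows, the NS line tracks the I/O stream (completions versus I/O that ended in error, most of them commands the tool aborted on purpose), while the CTRLR line tracks the abort commands themselves: how many were submitted, how many could not be submitted, and how many succeeded.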
00:14:36.939 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26575 00:14:36.939 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26636, failed to submit 62 00:14:36.939 success 26579, unsuccess 57, failed 0 00:14:36.939 09:55:14 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:36.939 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:36.939 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:36.939 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:36.939 09:55:14 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:36.939 09:55:14 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:36.939 09:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:36.939 09:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:37.505 rmmod nvme_tcp 00:14:37.505 rmmod nvme_fabrics 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 67931 ']' 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 67931 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' -z 67931 ']' 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # kill -0 67931 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # uname 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 67931 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 67931' 00:14:37.505 killing process with pid 67931 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # kill 67931 00:14:37.505 [2024-05-15 09:55:14.706470] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:37.505 09:55:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@971 -- # wait 67931 00:14:37.812 09:55:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:37.812 09:55:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:37.812 09:55:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:37.812 09:55:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:37.812 09:55:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 
-- # remove_spdk_ns 00:14:37.812 09:55:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.812 09:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:37.812 09:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.812 09:55:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:37.812 00:14:37.812 real 0m4.997s 00:14:37.812 user 0m13.558s 00:14:37.812 sys 0m1.395s 00:14:37.812 09:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:37.812 09:55:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:37.812 ************************************ 00:14:37.812 END TEST nvmf_abort 00:14:37.812 ************************************ 00:14:38.071 09:55:15 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:38.071 09:55:15 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:38.071 09:55:15 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:38.071 09:55:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.071 ************************************ 00:14:38.071 START TEST nvmf_ns_hotplug_stress 00:14:38.071 ************************************ 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:38.071 * Looking for test storage... 00:14:38.071 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.071 09:55:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.071 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.072 09:55:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:38.072 Cannot find device "nvmf_tgt_br" 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:38.072 Cannot find device "nvmf_tgt_br2" 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:38.072 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:38.330 Cannot find device "nvmf_tgt_br" 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:38.330 Cannot find device "nvmf_tgt_br2" 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:38.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:38.330 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:38.330 09:55:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:38.330 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:38.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:14:38.589 00:14:38.589 --- 10.0.0.2 ping statistics --- 00:14:38.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.589 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:38.589 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:38.589 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:14:38.589 00:14:38.589 --- 10.0.0.3 ping statistics --- 00:14:38.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.589 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:38.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:14:38.589 00:14:38.589 --- 10.0.0.1 ping statistics --- 00:14:38.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.589 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68205 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68205 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # '[' -z 68205 ']' 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:38.589 09:55:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.589 [2024-05-15 09:55:15.882586] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:14:38.589 [2024-05-15 09:55:15.882957] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.847 [2024-05-15 09:55:16.033304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:38.847 [2024-05-15 09:55:16.208051] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:38.847 [2024-05-15 09:55:16.208141] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.847 [2024-05-15 09:55:16.208157] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.847 [2024-05-15 09:55:16.208171] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.847 [2024-05-15 09:55:16.208182] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.847 [2024-05-15 09:55:16.209070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.847 [2024-05-15 09:55:16.209146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.847 [2024-05-15 09:55:16.209153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.783 09:55:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:39.783 09:55:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # return 0 00:14:39.783 09:55:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.783 09:55:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:39.783 09:55:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.783 09:55:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.783 09:55:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:39.783 09:55:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:40.041 [2024-05-15 09:55:17.296652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.041 09:55:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:40.347 09:55:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.621 [2024-05-15 09:55:17.865060] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:40.621 [2024-05-15 09:55:17.865423] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:40.621 09:55:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:40.879 09:55:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:41.136 Malloc0 00:14:41.136 09:55:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:41.395 Delay0 00:14:41.653 09:55:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.911 09:55:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:42.170 NULL1 00:14:42.170 09:55:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:42.429 09:55:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68341 00:14:42.429 09:55:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:42.429 09:55:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:42.429 09:55:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.838 Read completed with error (sct=0, sc=11) 00:14:43.838 09:55:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:43.838 09:55:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:43.838 09:55:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:44.403 true 00:14:44.403 09:55:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:44.403 09:55:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.968 09:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.968 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:45.226 09:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:45.226 09:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:45.484 true 00:14:45.484 09:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:45.484 09:55:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.742 09:55:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.309 09:55:23 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:46.309 09:55:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:46.568 true 00:14:46.568 09:55:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:46.568 09:55:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.826 09:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.084 09:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:47.084 09:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:47.342 true 00:14:47.342 09:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:47.342 09:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.909 09:55:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.190 09:55:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:48.190 09:55:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:48.190 true 00:14:48.190 09:55:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:48.190 09:55:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.769 09:55:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.769 09:55:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:48.769 09:55:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:49.027 true 00:14:49.027 09:55:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:49.027 09:55:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.961 09:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.220 09:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:50.220 09:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:50.478 true 00:14:50.478 09:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:50.478 09:55:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.737 09:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.995 09:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:50.995 09:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:51.255 true 00:14:51.255 09:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:51.255 09:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.521 09:55:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.087 09:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:52.087 09:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:52.087 true 00:14:52.087 09:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:52.087 09:55:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.021 09:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.280 09:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:53.280 09:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:53.538 true 00:14:53.538 09:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:53.538 09:55:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.796 09:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.053 09:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:54.053 09:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:54.311 true 00:14:54.311 09:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:54.311 09:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.568 09:55:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.826 09:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1012 00:14:54.826 09:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:55.084 true 00:14:55.084 09:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:55.085 09:55:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.018 09:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.274 09:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:56.274 09:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:56.575 true 00:14:56.575 09:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:56.575 09:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.575 09:55:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.834 09:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:56.834 09:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:57.093 true 00:14:57.093 09:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:57.093 09:55:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.030 09:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.288 09:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:58.288 09:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:58.545 true 00:14:58.545 09:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:58.545 09:55:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.803 09:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.060 09:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:59.060 09:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:59.317 true 00:14:59.317 09:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:14:59.317 09:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.575 09:55:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.140 09:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:15:00.140 09:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:15:00.397 true 00:15:00.397 09:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:00.398 09:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.654 09:55:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.951 09:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:15:00.951 09:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:15:01.209 true 00:15:01.209 09:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:01.209 09:55:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.142 09:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.142 09:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:15:02.142 09:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:15:02.763 true 00:15:02.763 09:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:02.763 09:55:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.763 09:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:03.329 09:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:15:03.329 09:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:15:03.587 true 00:15:03.587 09:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:03.587 09:55:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.845 09:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:04.102 09:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:15:04.102 09:55:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:15:04.360 true 00:15:04.360 09:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:04.360 09:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.618 09:55:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.212 09:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:15:05.212 09:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:15:05.212 true 00:15:05.469 09:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:05.469 09:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.726 09:55:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:05.984 09:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:15:05.984 09:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:15:06.245 true 00:15:06.506 09:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:06.506 09:55:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:07.075 09:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:07.332 09:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:15:07.332 09:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:15:07.588 true 00:15:07.589 09:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:07.589 09:55:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.152 09:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:08.410 09:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:15:08.410 09:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:15:08.668 true 00:15:08.668 09:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:08.668 09:55:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:08.926 09:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:09.492 09:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:15:09.492 09:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:15:09.750 true 00:15:09.750 09:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:09.750 09:55:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.008 09:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:10.267 09:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:15:10.267 09:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:15:10.267 true 00:15:10.525 09:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:10.525 09:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:10.782 09:55:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:11.042 09:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:15:11.042 09:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:15:11.299 true 00:15:11.299 09:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:11.299 09:55:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.233 09:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:12.491 09:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:15:12.491 09:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:15:12.748 true 00:15:12.748 Initializing NVMe Controllers 00:15:12.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:12.748 Controller IO queue size 128, less than required. 00:15:12.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:12.748 Controller IO queue size 128, less than required. 00:15:12.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:15:12.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:12.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:12.748 Initialization complete. Launching workers. 00:15:12.748 ======================================================== 00:15:12.748 Latency(us) 00:15:12.748 Device Information : IOPS MiB/s Average min max 00:15:12.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 305.95 0.15 138005.21 2819.79 1038276.01 00:15:12.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7284.87 3.56 17571.84 4077.36 461484.19 00:15:12.748 ======================================================== 00:15:12.748 Total : 7590.82 3.71 22425.94 2819.79 1038276.01 00:15:12.748 00:15:12.748 09:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68341 00:15:12.748 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68341) - No such process 00:15:12.748 09:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68341 00:15:12.748 09:55:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.006 09:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:13.265 09:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:13.265 09:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:13.265 09:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:13.265 09:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:13.265 09:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:13.522 null0 00:15:13.522 09:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:13.522 09:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:13.522 09:55:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:13.780 null1 00:15:13.780 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:13.780 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:13.780 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:14.038 null2 00:15:14.038 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:14.038 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:14.038 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:15:14.296 null3 00:15:14.296 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:14.296 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:15:14.296 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:14.554 null4 00:15:14.554 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:14.554 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:14.554 09:55:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:14.812 null5 00:15:14.812 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:14.812 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:14.812 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:15.070 null6 00:15:15.070 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:15.070 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:15.070 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:15.636 null7 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:15.636 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69377 69378 69380 69382 69384 69385 69388 69390 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:15.637 09:55:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:15.895 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:15.895 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:15.895 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:15.895 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:15.895 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:15.895 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:15.895 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:15.895 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.153 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:16.410 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:16.673 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:16.673 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.673 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.673 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:16.673 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:16.673 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.673 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.673 09:55:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.962 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.963 09:55:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:16.963 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:16.963 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:16.963 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:16.963 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:16.963 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:17.221 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:17.221 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:17.221 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:17.221 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:17.221 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.221 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.221 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.221 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.477 
09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.477 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:17.734 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:17.734 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:17.734 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:17.734 09:55:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:17.734 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:17.734 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.734 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.734 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:17.992 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:18.250 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.507 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:18.765 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.765 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.765 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:18.765 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.765 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.765 09:55:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:18.765 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:18.765 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.765 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:18.765 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:18.765 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:15:18.765 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:18.765 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:18.765 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:18.765 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.052 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.310 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:19.568 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:19.568 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:19.568 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.568 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.568 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:19.568 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:19.568 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:19.568 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.568 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.568 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:19.568 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:19.568 09:55:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:19.827 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.827 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.827 09:55:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:19.827 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.084 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:20.341 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.599 09:55:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:20.599 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:20.857 09:55:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:20.857 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:21.143 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:21.144 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.416 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.674 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.674 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.674 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:21.674 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.674 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.674 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.674 09:55:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.674 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:21.933 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:21.933 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:21.933 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:21.933 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:21.933 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:21.933 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:22.191 rmmod nvme_tcp 00:15:22.191 rmmod nvme_fabrics 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68205 ']' 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68205 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' -z 68205 ']' 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # kill -0 68205 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # uname 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 
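Note on the trace above: every ns_hotplug_stress iteration follows the same add/remove pattern. A minimal sketch of that pattern, reconstructed from the @14-@18 and @62-@66 markers in the trace (the helper name add_remove, the nsid/bdev arguments, the loop bound of 10 and the background wait are taken from the trace; the variable names and the exact structure of the real script are assumptions):

    # one worker per namespace: attach and detach the same nsid ten times
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # rpc.py path abbreviated; the trace calls /home/vagrant/spdk_repo/spdk/scripts/rpc.py
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # the caller (@62-@66) starts one worker per null bdev in the background and waits for all of them
    pids=()
    for ((n = 0; n < nthreads; n++)); do
        add_remove "$((n + 1))" "null$n" &
        pids+=($!)
    done
    wait "${pids[@]}"

The nsid-to-bdev pairing (nsid 7 with null6, nsid 8 with null7, and so on) and the eight waited-on worker PIDs match what the trace prints at the top of this block.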
00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 68205 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 68205' 00:15:22.191 killing process with pid 68205 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # kill 68205 00:15:22.191 [2024-05-15 09:55:59.397486] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:22.191 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # wait 68205 00:15:22.451 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:22.451 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:22.451 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:22.451 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:22.451 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:22.451 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.451 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.451 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.451 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:22.451 00:15:22.451 real 0m44.574s 00:15:22.451 user 3m33.336s 00:15:22.451 sys 0m17.318s 00:15:22.451 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:22.451 09:55:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.451 ************************************ 00:15:22.451 END TEST nvmf_ns_hotplug_stress 00:15:22.451 ************************************ 00:15:22.710 09:55:59 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:22.710 09:55:59 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:22.710 09:55:59 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:22.710 09:55:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:22.710 ************************************ 00:15:22.710 START TEST nvmf_connect_stress 00:15:22.710 ************************************ 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:22.710 * Looking for test storage... 
00:15:22.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.710 09:55:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.710 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:15:22.710 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:15:22.710 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.710 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.710 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:22.710 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.710 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:22.710 09:56:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.710 09:56:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.710 09:56:00 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.710 09:56:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:22.711 Cannot find device "nvmf_tgt_br" 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:22.711 Cannot find device "nvmf_tgt_br2" 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:22.711 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:22.968 Cannot find device "nvmf_tgt_br" 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:22.968 Cannot find device "nvmf_tgt_br2" 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:15:22.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:22.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:22.968 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:23.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:23.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:15:23.257 00:15:23.257 --- 10.0.0.2 ping statistics --- 00:15:23.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.257 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:23.257 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:23.257 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:15:23.257 00:15:23.257 --- 10.0.0.3 ping statistics --- 00:15:23.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.257 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:23.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:23.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:23.257 00:15:23.257 --- 10.0.0.1 ping statistics --- 00:15:23.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.257 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=70758 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 70758 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # '[' -z 70758 ']' 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
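The nvmf_veth_init steps traced above reduce to a small veth/namespace topology: an initiator interface on the host, a target interface inside the nvmf_tgt_ns_spdk namespace, both bridged through nvmf_br, with TCP port 4420 opened and reachability verified by ping. The following is a condensed sketch assembled from the commands in the trace (assumptions: run as root with iproute2/iptables available; the second target interface nvmf_tgt_if2/10.0.0.3 and the error handling in nvmf/common.sh are omitted):

  # Build the namespace and the two veth pairs (names taken from the trace).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # Address the initiator side on the host and the target side inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # Bring everything up, including loopback inside the namespace.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side peers together so initiator and target share one L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Allow NVMe/TCP traffic and bridge forwarding, then verify reachability both ways.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1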
00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:23.257 09:56:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.257 [2024-05-15 09:56:00.506405] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:15:23.257 [2024-05-15 09:56:00.507114] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.514 [2024-05-15 09:56:00.652137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:23.514 [2024-05-15 09:56:00.827630] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.514 [2024-05-15 09:56:00.827716] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.514 [2024-05-15 09:56:00.827732] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.514 [2024-05-15 09:56:00.827746] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.515 [2024-05-15 09:56:00.827758] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.515 [2024-05-15 09:56:00.828746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.515 [2024-05-15 09:56:00.828822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.515 [2024-05-15 09:56:00.828829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.080 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:24.080 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@861 -- # return 0 00:15:24.080 09:56:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:24.337 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:24.337 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.337 09:56:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.338 [2024-05-15 09:56:01.521314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.338 
09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.338 [2024-05-15 09:56:01.545233] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:24.338 [2024-05-15 09:56:01.545958] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.338 NULL1 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=70810 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.338 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.902 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.902 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:24.902 09:56:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.902 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.902 09:56:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.161 09:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.161 09:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:25.161 09:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.161 09:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 
-- # xtrace_disable 00:15:25.161 09:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.418 09:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.418 09:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:25.418 09:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.418 09:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.418 09:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.676 09:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.676 09:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:25.676 09:56:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.676 09:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.676 09:56:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.940 09:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.940 09:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:25.940 09:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.940 09:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.940 09:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.246 09:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:26.246 09:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:26.246 09:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.246 09:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:26.246 09:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.826 09:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:26.826 09:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:26.826 09:56:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.826 09:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:26.826 09:56:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.084 09:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.084 09:56:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:27.084 09:56:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.084 09:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.084 09:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.340 09:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.340 09:56:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:27.340 09:56:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.340 09:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.340 09:56:04 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.598 09:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.598 09:56:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:27.598 09:56:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.598 09:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.598 09:56:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.855 09:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:27.855 09:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:27.855 09:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.855 09:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:27.855 09:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.422 09:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:28.422 09:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:28.422 09:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.422 09:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:28.422 09:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.684 09:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:28.684 09:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:28.684 09:56:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.684 09:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:28.684 09:56:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.941 09:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:28.941 09:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:28.941 09:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.941 09:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:28.941 09:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.197 09:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.197 09:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:29.197 09:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.197 09:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.197 09:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.454 09:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:29.454 09:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:29.454 09:56:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.455 09:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:29.455 09:56:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.018 09:56:07 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:30.018 09:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:30.018 09:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.018 09:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:30.018 09:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.276 09:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:30.276 09:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:30.276 09:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.276 09:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:30.276 09:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.533 09:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:30.533 09:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:30.533 09:56:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.533 09:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:30.533 09:56:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.791 09:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:30.791 09:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:30.791 09:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.791 09:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:30.791 09:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.364 09:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.364 09:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:31.364 09:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.364 09:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.364 09:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.621 09:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.621 09:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:31.621 09:56:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.621 09:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.621 09:56:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.878 09:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:31.878 09:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:31.878 09:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.878 09:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:31.878 09:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.136 09:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 
0 ]] 00:15:32.136 09:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:32.136 09:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.136 09:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:32.136 09:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.393 09:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:32.393 09:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:32.393 09:56:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.393 09:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:32.393 09:56:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.959 09:56:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:32.959 09:56:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:32.959 09:56:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.959 09:56:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:32.959 09:56:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.217 09:56:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.217 09:56:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:33.217 09:56:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.217 09:56:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.217 09:56:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.519 09:56:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.519 09:56:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:33.519 09:56:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.519 09:56:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.519 09:56:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.777 09:56:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:33.777 09:56:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:33.777 09:56:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.777 09:56:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:33.777 09:56:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.035 09:56:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.035 09:56:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:34.035 09:56:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.035 09:56:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.035 09:56:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.600 09:56:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.600 09:56:11 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 70810 00:15:34.600 09:56:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.600 09:56:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:34.600 09:56:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.600 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 70810 00:15:34.858 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (70810) - No such process 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 70810 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.858 rmmod nvme_tcp 00:15:34.858 rmmod nvme_fabrics 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 70758 ']' 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 70758 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' -z 70758 ']' 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # kill -0 70758 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # uname 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 70758 00:15:34.858 killing process with pid 70758 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 70758' 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # kill 70758 00:15:34.858 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@971 -- # wait 70758 00:15:34.858 [2024-05-15 09:56:12.138521] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:35.424 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:35.424 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:35.424 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:35.424 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:35.424 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:35.424 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.424 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:35.424 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.424 09:56:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:35.424 00:15:35.424 real 0m12.685s 00:15:35.424 user 0m40.574s 00:15:35.424 sys 0m4.326s 00:15:35.424 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:35.424 09:56:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.424 ************************************ 00:15:35.424 END TEST nvmf_connect_stress 00:15:35.424 ************************************ 00:15:35.424 09:56:12 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:35.424 09:56:12 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:35.424 09:56:12 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:35.424 09:56:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:35.424 ************************************ 00:15:35.424 START TEST nvmf_fused_ordering 00:15:35.424 ************************************ 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:35.424 * Looking for test storage... 
00:15:35.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.424 09:56:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:35.425 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:35.683 Cannot find device "nvmf_tgt_br" 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.683 Cannot find device "nvmf_tgt_br2" 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:35.683 Cannot find device "nvmf_tgt_br" 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:35.683 Cannot find device "nvmf_tgt_br2" 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:15:35.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:35.683 09:56:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:35.683 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:35.683 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:35.683 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:35.683 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:35.683 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:35.683 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:35.683 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:35.683 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:35.953 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:35.953 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:35.953 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:35.953 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:35.953 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:35.953 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:35.953 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:35.953 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:35.953 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:35.953 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:35.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:35.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:35.953 00:15:35.953 --- 10.0.0.2 ping statistics --- 00:15:35.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.954 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:35.954 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:35.954 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:15:35.954 00:15:35.954 --- 10.0.0.3 ping statistics --- 00:15:35.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.954 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:35.954 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.954 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:35.954 00:15:35.954 --- 10.0.0.1 ping statistics --- 00:15:35.954 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.954 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71135 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:35.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71135 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # '[' -z 71135 ']' 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
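The nvmfappstart/waitforlisten sequence above launches nvmf_tgt inside the target namespace and then blocks until its JSON-RPC socket answers. A minimal sketch of that pattern, assuming a simple poll of the default /var/tmp/spdk.sock socket via rpc.py rpc_get_methods (the real waitforlisten helper in autotest_common.sh adds retry limits and timeouts not shown here):

  # Start the target in the namespace with the same flags as the trace and remember its pid.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Poll the RPC socket until the target responds; give up if the process dies first.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
      sleep 0.5
  done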
00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:35.954 09:56:13 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.954 [2024-05-15 09:56:13.234117] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:15:35.954 [2024-05-15 09:56:13.234486] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.213 [2024-05-15 09:56:13.374760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.213 [2024-05-15 09:56:13.537585] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.213 [2024-05-15 09:56:13.537868] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.213 [2024-05-15 09:56:13.537987] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.213 [2024-05-15 09:56:13.538044] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.213 [2024-05-15 09:56:13.538149] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.213 [2024-05-15 09:56:13.538224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # return 0 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:37.146 [2024-05-15 09:56:14.240016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.146 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:37.146 [2024-05-15 
09:56:14.263945] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:37.147 [2024-05-15 09:56:14.264546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:37.147 NULL1 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:37.147 09:56:14 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:37.147 [2024-05-15 09:56:14.326983] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
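Before the fused_ordering workload starts, the trace above provisions the target over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2:4420, and a null bdev attached as its namespace. A sketch of the equivalent standalone rpc.py calls, assuming the test's rpc_cmd wrapper maps to these command-line invocations against the default RPC socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks (reported as "size: 1GB" below)
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # Then run the fused-ordering workload against that subsystem, as in the trace.
  /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'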
00:15:37.147 [2024-05-15 09:56:14.327354] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71186 ] 00:15:37.404 Attached to nqn.2016-06.io.spdk:cnode1 00:15:37.404 Namespace ID: 1 size: 1GB 00:15:37.404 fused_ordering(0) 00:15:37.404 fused_ordering(1) 00:15:37.404 fused_ordering(2) 00:15:37.404 fused_ordering(3) 00:15:37.404 fused_ordering(4) 00:15:37.404 fused_ordering(5) 00:15:37.404 fused_ordering(6) 00:15:37.404 fused_ordering(7) 00:15:37.404 fused_ordering(8) 00:15:37.404 fused_ordering(9) 00:15:37.404 fused_ordering(10) 00:15:37.404 fused_ordering(11) 00:15:37.404 fused_ordering(12) 00:15:37.404 fused_ordering(13) 00:15:37.404 fused_ordering(14) 00:15:37.404 fused_ordering(15) 00:15:37.404 fused_ordering(16) 00:15:37.404 fused_ordering(17) 00:15:37.404 fused_ordering(18) 00:15:37.404 fused_ordering(19) 00:15:37.404 fused_ordering(20) 00:15:37.404 fused_ordering(21) 00:15:37.404 fused_ordering(22) 00:15:37.404 fused_ordering(23) 00:15:37.404 fused_ordering(24) 00:15:37.404 fused_ordering(25) 00:15:37.404 fused_ordering(26) 00:15:37.404 fused_ordering(27) 00:15:37.404 fused_ordering(28) 00:15:37.404 fused_ordering(29) 00:15:37.405 fused_ordering(30) 00:15:37.405 fused_ordering(31) 00:15:37.405 fused_ordering(32) 00:15:37.405 fused_ordering(33) 00:15:37.405 fused_ordering(34) 00:15:37.405 fused_ordering(35) 00:15:37.405 fused_ordering(36) 00:15:37.405 fused_ordering(37) 00:15:37.405 fused_ordering(38) 00:15:37.405 fused_ordering(39) 00:15:37.405 fused_ordering(40) 00:15:37.405 fused_ordering(41) 00:15:37.405 fused_ordering(42) 00:15:37.405 fused_ordering(43) 00:15:37.405 fused_ordering(44) 00:15:37.405 fused_ordering(45) 00:15:37.405 fused_ordering(46) 00:15:37.405 fused_ordering(47) 00:15:37.405 fused_ordering(48) 00:15:37.405 fused_ordering(49) 00:15:37.405 fused_ordering(50) 00:15:37.405 fused_ordering(51) 00:15:37.405 fused_ordering(52) 00:15:37.405 fused_ordering(53) 00:15:37.405 fused_ordering(54) 00:15:37.405 fused_ordering(55) 00:15:37.405 fused_ordering(56) 00:15:37.405 fused_ordering(57) 00:15:37.405 fused_ordering(58) 00:15:37.405 fused_ordering(59) 00:15:37.405 fused_ordering(60) 00:15:37.405 fused_ordering(61) 00:15:37.405 fused_ordering(62) 00:15:37.405 fused_ordering(63) 00:15:37.405 fused_ordering(64) 00:15:37.405 fused_ordering(65) 00:15:37.405 fused_ordering(66) 00:15:37.405 fused_ordering(67) 00:15:37.405 fused_ordering(68) 00:15:37.405 fused_ordering(69) 00:15:37.405 fused_ordering(70) 00:15:37.405 fused_ordering(71) 00:15:37.405 fused_ordering(72) 00:15:37.405 fused_ordering(73) 00:15:37.405 fused_ordering(74) 00:15:37.405 fused_ordering(75) 00:15:37.405 fused_ordering(76) 00:15:37.405 fused_ordering(77) 00:15:37.405 fused_ordering(78) 00:15:37.405 fused_ordering(79) 00:15:37.405 fused_ordering(80) 00:15:37.405 fused_ordering(81) 00:15:37.405 fused_ordering(82) 00:15:37.405 fused_ordering(83) 00:15:37.405 fused_ordering(84) 00:15:37.405 fused_ordering(85) 00:15:37.405 fused_ordering(86) 00:15:37.405 fused_ordering(87) 00:15:37.405 fused_ordering(88) 00:15:37.405 fused_ordering(89) 00:15:37.405 fused_ordering(90) 00:15:37.405 fused_ordering(91) 00:15:37.405 fused_ordering(92) 00:15:37.405 fused_ordering(93) 00:15:37.405 fused_ordering(94) 00:15:37.405 fused_ordering(95) 00:15:37.405 fused_ordering(96) 00:15:37.405 fused_ordering(97) 00:15:37.405 fused_ordering(98) 
00:15:37.405 fused_ordering(99) 00:15:37.405 fused_ordering(100) 00:15:37.405 fused_ordering(101) 00:15:37.405 fused_ordering(102) 00:15:37.405 fused_ordering(103) 00:15:37.405 fused_ordering(104) 00:15:37.405 fused_ordering(105) 00:15:37.405 fused_ordering(106) 00:15:37.405 fused_ordering(107) 00:15:37.405 fused_ordering(108) 00:15:37.405 fused_ordering(109) 00:15:37.405 fused_ordering(110) 00:15:37.405 fused_ordering(111) 00:15:37.405 fused_ordering(112) 00:15:37.405 fused_ordering(113) 00:15:37.405 fused_ordering(114) 00:15:37.405 fused_ordering(115) 00:15:37.405 fused_ordering(116) 00:15:37.405 fused_ordering(117) 00:15:37.405 fused_ordering(118) 00:15:37.405 fused_ordering(119) 00:15:37.405 fused_ordering(120) 00:15:37.405 fused_ordering(121) 00:15:37.405 fused_ordering(122) 00:15:37.405 fused_ordering(123) 00:15:37.405 fused_ordering(124) 00:15:37.405 fused_ordering(125) 00:15:37.405 fused_ordering(126) 00:15:37.405 fused_ordering(127) 00:15:37.405 fused_ordering(128) 00:15:37.405 fused_ordering(129) 00:15:37.405 fused_ordering(130) 00:15:37.405 fused_ordering(131) 00:15:37.405 fused_ordering(132) 00:15:37.405 fused_ordering(133) 00:15:37.405 fused_ordering(134) 00:15:37.405 fused_ordering(135) 00:15:37.405 fused_ordering(136) 00:15:37.405 fused_ordering(137) 00:15:37.405 fused_ordering(138) 00:15:37.405 fused_ordering(139) 00:15:37.405 fused_ordering(140) 00:15:37.405 fused_ordering(141) 00:15:37.405 fused_ordering(142) 00:15:37.405 fused_ordering(143) 00:15:37.405 fused_ordering(144) 00:15:37.405 fused_ordering(145) 00:15:37.405 fused_ordering(146) 00:15:37.405 fused_ordering(147) 00:15:37.405 fused_ordering(148) 00:15:37.405 fused_ordering(149) 00:15:37.405 fused_ordering(150) 00:15:37.405 fused_ordering(151) 00:15:37.405 fused_ordering(152) 00:15:37.405 fused_ordering(153) 00:15:37.405 fused_ordering(154) 00:15:37.405 fused_ordering(155) 00:15:37.405 fused_ordering(156) 00:15:37.405 fused_ordering(157) 00:15:37.405 fused_ordering(158) 00:15:37.405 fused_ordering(159) 00:15:37.405 fused_ordering(160) 00:15:37.405 fused_ordering(161) 00:15:37.405 fused_ordering(162) 00:15:37.405 fused_ordering(163) 00:15:37.405 fused_ordering(164) 00:15:37.405 fused_ordering(165) 00:15:37.405 fused_ordering(166) 00:15:37.405 fused_ordering(167) 00:15:37.405 fused_ordering(168) 00:15:37.405 fused_ordering(169) 00:15:37.405 fused_ordering(170) 00:15:37.405 fused_ordering(171) 00:15:37.405 fused_ordering(172) 00:15:37.405 fused_ordering(173) 00:15:37.405 fused_ordering(174) 00:15:37.405 fused_ordering(175) 00:15:37.405 fused_ordering(176) 00:15:37.405 fused_ordering(177) 00:15:37.405 fused_ordering(178) 00:15:37.405 fused_ordering(179) 00:15:37.405 fused_ordering(180) 00:15:37.405 fused_ordering(181) 00:15:37.405 fused_ordering(182) 00:15:37.405 fused_ordering(183) 00:15:37.405 fused_ordering(184) 00:15:37.405 fused_ordering(185) 00:15:37.405 fused_ordering(186) 00:15:37.405 fused_ordering(187) 00:15:37.405 fused_ordering(188) 00:15:37.405 fused_ordering(189) 00:15:37.405 fused_ordering(190) 00:15:37.405 fused_ordering(191) 00:15:37.405 fused_ordering(192) 00:15:37.405 fused_ordering(193) 00:15:37.405 fused_ordering(194) 00:15:37.405 fused_ordering(195) 00:15:37.405 fused_ordering(196) 00:15:37.405 fused_ordering(197) 00:15:37.405 fused_ordering(198) 00:15:37.405 fused_ordering(199) 00:15:37.405 fused_ordering(200) 00:15:37.405 fused_ordering(201) 00:15:37.405 fused_ordering(202) 00:15:37.405 fused_ordering(203) 00:15:37.405 fused_ordering(204) 00:15:37.405 fused_ordering(205) 00:15:37.664 
fused_ordering(206) 00:15:37.664 fused_ordering(207) 00:15:37.664 fused_ordering(208) 00:15:37.664 fused_ordering(209) 00:15:37.664 fused_ordering(210) 00:15:37.664 fused_ordering(211) 00:15:37.664 fused_ordering(212) 00:15:37.664 fused_ordering(213) 00:15:37.664 fused_ordering(214) 00:15:37.664 fused_ordering(215) 00:15:37.664 fused_ordering(216) 00:15:37.664 fused_ordering(217) 00:15:37.664 fused_ordering(218) 00:15:37.664 fused_ordering(219) 00:15:37.664 fused_ordering(220) 00:15:37.664 fused_ordering(221) 00:15:37.664 fused_ordering(222) 00:15:37.664 fused_ordering(223) 00:15:37.664 fused_ordering(224) 00:15:37.664 fused_ordering(225) 00:15:37.664 fused_ordering(226) 00:15:37.664 fused_ordering(227) 00:15:37.664 fused_ordering(228) 00:15:37.664 fused_ordering(229) 00:15:37.664 fused_ordering(230) 00:15:37.664 fused_ordering(231) 00:15:37.664 fused_ordering(232) 00:15:37.664 fused_ordering(233) 00:15:37.664 fused_ordering(234) 00:15:37.664 fused_ordering(235) 00:15:37.664 fused_ordering(236) 00:15:37.664 fused_ordering(237) 00:15:37.664 fused_ordering(238) 00:15:37.664 fused_ordering(239) 00:15:37.664 fused_ordering(240) 00:15:37.664 fused_ordering(241) 00:15:37.664 fused_ordering(242) 00:15:37.664 fused_ordering(243) 00:15:37.664 fused_ordering(244) 00:15:37.664 fused_ordering(245) 00:15:37.664 fused_ordering(246) 00:15:37.664 fused_ordering(247) 00:15:37.664 fused_ordering(248) 00:15:37.664 fused_ordering(249) 00:15:37.664 fused_ordering(250) 00:15:37.664 fused_ordering(251) 00:15:37.664 fused_ordering(252) 00:15:37.664 fused_ordering(253) 00:15:37.664 fused_ordering(254) 00:15:37.664 fused_ordering(255) 00:15:37.664 fused_ordering(256) 00:15:37.664 fused_ordering(257) 00:15:37.664 fused_ordering(258) 00:15:37.664 fused_ordering(259) 00:15:37.664 fused_ordering(260) 00:15:37.664 fused_ordering(261) 00:15:37.664 fused_ordering(262) 00:15:37.664 fused_ordering(263) 00:15:37.664 fused_ordering(264) 00:15:37.664 fused_ordering(265) 00:15:37.664 fused_ordering(266) 00:15:37.664 fused_ordering(267) 00:15:37.664 fused_ordering(268) 00:15:37.664 fused_ordering(269) 00:15:37.664 fused_ordering(270) 00:15:37.664 fused_ordering(271) 00:15:37.664 fused_ordering(272) 00:15:37.664 fused_ordering(273) 00:15:37.664 fused_ordering(274) 00:15:37.664 fused_ordering(275) 00:15:37.664 fused_ordering(276) 00:15:37.664 fused_ordering(277) 00:15:37.664 fused_ordering(278) 00:15:37.664 fused_ordering(279) 00:15:37.664 fused_ordering(280) 00:15:37.664 fused_ordering(281) 00:15:37.664 fused_ordering(282) 00:15:37.664 fused_ordering(283) 00:15:37.664 fused_ordering(284) 00:15:37.664 fused_ordering(285) 00:15:37.664 fused_ordering(286) 00:15:37.664 fused_ordering(287) 00:15:37.664 fused_ordering(288) 00:15:37.664 fused_ordering(289) 00:15:37.664 fused_ordering(290) 00:15:37.664 fused_ordering(291) 00:15:37.664 fused_ordering(292) 00:15:37.664 fused_ordering(293) 00:15:37.664 fused_ordering(294) 00:15:37.664 fused_ordering(295) 00:15:37.664 fused_ordering(296) 00:15:37.664 fused_ordering(297) 00:15:37.664 fused_ordering(298) 00:15:37.664 fused_ordering(299) 00:15:37.664 fused_ordering(300) 00:15:37.664 fused_ordering(301) 00:15:37.664 fused_ordering(302) 00:15:37.664 fused_ordering(303) 00:15:37.664 fused_ordering(304) 00:15:37.664 fused_ordering(305) 00:15:37.664 fused_ordering(306) 00:15:37.664 fused_ordering(307) 00:15:37.664 fused_ordering(308) 00:15:37.664 fused_ordering(309) 00:15:37.664 fused_ordering(310) 00:15:37.664 fused_ordering(311) 00:15:37.664 fused_ordering(312) 00:15:37.664 fused_ordering(313) 
00:15:37.664 fused_ordering(314) 00:15:37.664 fused_ordering(315) 00:15:37.664 fused_ordering(316) 00:15:37.664 fused_ordering(317) 00:15:37.664 fused_ordering(318) 00:15:37.664 fused_ordering(319) 00:15:37.664 fused_ordering(320) 00:15:37.664 fused_ordering(321) 00:15:37.664 fused_ordering(322) 00:15:37.664 fused_ordering(323) 00:15:37.664 fused_ordering(324) 00:15:37.664 fused_ordering(325) 00:15:37.664 fused_ordering(326) 00:15:37.664 fused_ordering(327) 00:15:37.664 fused_ordering(328) 00:15:37.664 fused_ordering(329) 00:15:37.664 fused_ordering(330) 00:15:37.664 fused_ordering(331) 00:15:37.664 fused_ordering(332) 00:15:37.664 fused_ordering(333) 00:15:37.664 fused_ordering(334) 00:15:37.664 fused_ordering(335) 00:15:37.664 fused_ordering(336) 00:15:37.664 fused_ordering(337) 00:15:37.664 fused_ordering(338) 00:15:37.664 fused_ordering(339) 00:15:37.664 fused_ordering(340) 00:15:37.664 fused_ordering(341) 00:15:37.664 fused_ordering(342) 00:15:37.664 fused_ordering(343) 00:15:37.664 fused_ordering(344) 00:15:37.664 fused_ordering(345) 00:15:37.664 fused_ordering(346) 00:15:37.664 fused_ordering(347) 00:15:37.664 fused_ordering(348) 00:15:37.664 fused_ordering(349) 00:15:37.664 fused_ordering(350) 00:15:37.664 fused_ordering(351) 00:15:37.664 fused_ordering(352) 00:15:37.664 fused_ordering(353) 00:15:37.664 fused_ordering(354) 00:15:37.664 fused_ordering(355) 00:15:37.664 fused_ordering(356) 00:15:37.664 fused_ordering(357) 00:15:37.664 fused_ordering(358) 00:15:37.664 fused_ordering(359) 00:15:37.664 fused_ordering(360) 00:15:37.664 fused_ordering(361) 00:15:37.664 fused_ordering(362) 00:15:37.664 fused_ordering(363) 00:15:37.664 fused_ordering(364) 00:15:37.664 fused_ordering(365) 00:15:37.664 fused_ordering(366) 00:15:37.664 fused_ordering(367) 00:15:37.664 fused_ordering(368) 00:15:37.664 fused_ordering(369) 00:15:37.664 fused_ordering(370) 00:15:37.664 fused_ordering(371) 00:15:37.664 fused_ordering(372) 00:15:37.664 fused_ordering(373) 00:15:37.664 fused_ordering(374) 00:15:37.664 fused_ordering(375) 00:15:37.664 fused_ordering(376) 00:15:37.664 fused_ordering(377) 00:15:37.664 fused_ordering(378) 00:15:37.664 fused_ordering(379) 00:15:37.664 fused_ordering(380) 00:15:37.664 fused_ordering(381) 00:15:37.664 fused_ordering(382) 00:15:37.664 fused_ordering(383) 00:15:37.664 fused_ordering(384) 00:15:37.664 fused_ordering(385) 00:15:37.664 fused_ordering(386) 00:15:37.664 fused_ordering(387) 00:15:37.664 fused_ordering(388) 00:15:37.664 fused_ordering(389) 00:15:37.664 fused_ordering(390) 00:15:37.664 fused_ordering(391) 00:15:37.664 fused_ordering(392) 00:15:37.664 fused_ordering(393) 00:15:37.664 fused_ordering(394) 00:15:37.664 fused_ordering(395) 00:15:37.664 fused_ordering(396) 00:15:37.664 fused_ordering(397) 00:15:37.664 fused_ordering(398) 00:15:37.664 fused_ordering(399) 00:15:37.664 fused_ordering(400) 00:15:37.664 fused_ordering(401) 00:15:37.664 fused_ordering(402) 00:15:37.664 fused_ordering(403) 00:15:37.664 fused_ordering(404) 00:15:37.664 fused_ordering(405) 00:15:37.664 fused_ordering(406) 00:15:37.664 fused_ordering(407) 00:15:37.664 fused_ordering(408) 00:15:37.664 fused_ordering(409) 00:15:37.664 fused_ordering(410) 00:15:38.229 fused_ordering(411) 00:15:38.229 fused_ordering(412) 00:15:38.229 fused_ordering(413) 00:15:38.229 fused_ordering(414) 00:15:38.229 fused_ordering(415) 00:15:38.229 fused_ordering(416) 00:15:38.229 fused_ordering(417) 00:15:38.229 fused_ordering(418) 00:15:38.229 fused_ordering(419) 00:15:38.229 fused_ordering(420) 00:15:38.229 
fused_ordering(421) 00:15:38.229 fused_ordering(422) 00:15:38.229 fused_ordering(423) 00:15:38.229 fused_ordering(424) 00:15:38.229 fused_ordering(425) 00:15:38.229 fused_ordering(426) 00:15:38.229 fused_ordering(427) 00:15:38.229 fused_ordering(428) 00:15:38.229 fused_ordering(429) 00:15:38.229 fused_ordering(430) 00:15:38.229 fused_ordering(431) 00:15:38.229 fused_ordering(432) 00:15:38.229 fused_ordering(433) 00:15:38.229 fused_ordering(434) 00:15:38.229 fused_ordering(435) 00:15:38.229 fused_ordering(436) 00:15:38.229 fused_ordering(437) 00:15:38.229 fused_ordering(438) 00:15:38.229 fused_ordering(439) 00:15:38.229 fused_ordering(440) 00:15:38.229 fused_ordering(441) 00:15:38.229 fused_ordering(442) 00:15:38.229 fused_ordering(443) 00:15:38.229 fused_ordering(444) 00:15:38.229 fused_ordering(445) 00:15:38.229 fused_ordering(446) 00:15:38.229 fused_ordering(447) 00:15:38.229 fused_ordering(448) 00:15:38.229 fused_ordering(449) 00:15:38.229 fused_ordering(450) 00:15:38.229 fused_ordering(451) 00:15:38.229 fused_ordering(452) 00:15:38.229 fused_ordering(453) 00:15:38.229 fused_ordering(454) 00:15:38.229 fused_ordering(455) 00:15:38.229 fused_ordering(456) 00:15:38.229 fused_ordering(457) 00:15:38.229 fused_ordering(458) 00:15:38.229 fused_ordering(459) 00:15:38.229 fused_ordering(460) 00:15:38.229 fused_ordering(461) 00:15:38.229 fused_ordering(462) 00:15:38.229 fused_ordering(463) 00:15:38.229 fused_ordering(464) 00:15:38.229 fused_ordering(465) 00:15:38.229 fused_ordering(466) 00:15:38.229 fused_ordering(467) 00:15:38.229 fused_ordering(468) 00:15:38.229 fused_ordering(469) 00:15:38.229 fused_ordering(470) 00:15:38.229 fused_ordering(471) 00:15:38.229 fused_ordering(472) 00:15:38.229 fused_ordering(473) 00:15:38.229 fused_ordering(474) 00:15:38.229 fused_ordering(475) 00:15:38.229 fused_ordering(476) 00:15:38.229 fused_ordering(477) 00:15:38.230 fused_ordering(478) 00:15:38.230 fused_ordering(479) 00:15:38.230 fused_ordering(480) 00:15:38.230 fused_ordering(481) 00:15:38.230 fused_ordering(482) 00:15:38.230 fused_ordering(483) 00:15:38.230 fused_ordering(484) 00:15:38.230 fused_ordering(485) 00:15:38.230 fused_ordering(486) 00:15:38.230 fused_ordering(487) 00:15:38.230 fused_ordering(488) 00:15:38.230 fused_ordering(489) 00:15:38.230 fused_ordering(490) 00:15:38.230 fused_ordering(491) 00:15:38.230 fused_ordering(492) 00:15:38.230 fused_ordering(493) 00:15:38.230 fused_ordering(494) 00:15:38.230 fused_ordering(495) 00:15:38.230 fused_ordering(496) 00:15:38.230 fused_ordering(497) 00:15:38.230 fused_ordering(498) 00:15:38.230 fused_ordering(499) 00:15:38.230 fused_ordering(500) 00:15:38.230 fused_ordering(501) 00:15:38.230 fused_ordering(502) 00:15:38.230 fused_ordering(503) 00:15:38.230 fused_ordering(504) 00:15:38.230 fused_ordering(505) 00:15:38.230 fused_ordering(506) 00:15:38.230 fused_ordering(507) 00:15:38.230 fused_ordering(508) 00:15:38.230 fused_ordering(509) 00:15:38.230 fused_ordering(510) 00:15:38.230 fused_ordering(511) 00:15:38.230 fused_ordering(512) 00:15:38.230 fused_ordering(513) 00:15:38.230 fused_ordering(514) 00:15:38.230 fused_ordering(515) 00:15:38.230 fused_ordering(516) 00:15:38.230 fused_ordering(517) 00:15:38.230 fused_ordering(518) 00:15:38.230 fused_ordering(519) 00:15:38.230 fused_ordering(520) 00:15:38.230 fused_ordering(521) 00:15:38.230 fused_ordering(522) 00:15:38.230 fused_ordering(523) 00:15:38.230 fused_ordering(524) 00:15:38.230 fused_ordering(525) 00:15:38.230 fused_ordering(526) 00:15:38.230 fused_ordering(527) 00:15:38.230 fused_ordering(528) 
00:15:38.230 fused_ordering(529) 00:15:38.230 fused_ordering(530) 00:15:38.230 fused_ordering(531) 00:15:38.230 fused_ordering(532) 00:15:38.230 fused_ordering(533) 00:15:38.230 fused_ordering(534) 00:15:38.230 fused_ordering(535) 00:15:38.230 fused_ordering(536) 00:15:38.230 fused_ordering(537) 00:15:38.230 fused_ordering(538) 00:15:38.230 fused_ordering(539) 00:15:38.230 fused_ordering(540) 00:15:38.230 fused_ordering(541) 00:15:38.230 fused_ordering(542) 00:15:38.230 fused_ordering(543) 00:15:38.230 fused_ordering(544) 00:15:38.230 fused_ordering(545) 00:15:38.230 fused_ordering(546) 00:15:38.230 fused_ordering(547) 00:15:38.230 fused_ordering(548) 00:15:38.230 fused_ordering(549) 00:15:38.230 fused_ordering(550) 00:15:38.230 fused_ordering(551) 00:15:38.230 fused_ordering(552) 00:15:38.230 fused_ordering(553) 00:15:38.230 fused_ordering(554) 00:15:38.230 fused_ordering(555) 00:15:38.230 fused_ordering(556) 00:15:38.230 fused_ordering(557) 00:15:38.230 fused_ordering(558) 00:15:38.230 fused_ordering(559) 00:15:38.230 fused_ordering(560) 00:15:38.230 fused_ordering(561) 00:15:38.230 fused_ordering(562) 00:15:38.230 fused_ordering(563) 00:15:38.230 fused_ordering(564) 00:15:38.230 fused_ordering(565) 00:15:38.230 fused_ordering(566) 00:15:38.230 fused_ordering(567) 00:15:38.230 fused_ordering(568) 00:15:38.230 fused_ordering(569) 00:15:38.230 fused_ordering(570) 00:15:38.230 fused_ordering(571) 00:15:38.230 fused_ordering(572) 00:15:38.230 fused_ordering(573) 00:15:38.230 fused_ordering(574) 00:15:38.230 fused_ordering(575) 00:15:38.230 fused_ordering(576) 00:15:38.230 fused_ordering(577) 00:15:38.230 fused_ordering(578) 00:15:38.230 fused_ordering(579) 00:15:38.230 fused_ordering(580) 00:15:38.230 fused_ordering(581) 00:15:38.230 fused_ordering(582) 00:15:38.230 fused_ordering(583) 00:15:38.230 fused_ordering(584) 00:15:38.230 fused_ordering(585) 00:15:38.230 fused_ordering(586) 00:15:38.230 fused_ordering(587) 00:15:38.230 fused_ordering(588) 00:15:38.230 fused_ordering(589) 00:15:38.230 fused_ordering(590) 00:15:38.230 fused_ordering(591) 00:15:38.230 fused_ordering(592) 00:15:38.230 fused_ordering(593) 00:15:38.230 fused_ordering(594) 00:15:38.230 fused_ordering(595) 00:15:38.230 fused_ordering(596) 00:15:38.230 fused_ordering(597) 00:15:38.230 fused_ordering(598) 00:15:38.230 fused_ordering(599) 00:15:38.230 fused_ordering(600) 00:15:38.230 fused_ordering(601) 00:15:38.230 fused_ordering(602) 00:15:38.230 fused_ordering(603) 00:15:38.230 fused_ordering(604) 00:15:38.230 fused_ordering(605) 00:15:38.230 fused_ordering(606) 00:15:38.230 fused_ordering(607) 00:15:38.230 fused_ordering(608) 00:15:38.230 fused_ordering(609) 00:15:38.230 fused_ordering(610) 00:15:38.230 fused_ordering(611) 00:15:38.230 fused_ordering(612) 00:15:38.230 fused_ordering(613) 00:15:38.230 fused_ordering(614) 00:15:38.230 fused_ordering(615) 00:15:38.795 fused_ordering(616) 00:15:38.795 fused_ordering(617) 00:15:38.795 fused_ordering(618) 00:15:38.795 fused_ordering(619) 00:15:38.795 fused_ordering(620) 00:15:38.795 fused_ordering(621) 00:15:38.795 fused_ordering(622) 00:15:38.795 fused_ordering(623) 00:15:38.795 fused_ordering(624) 00:15:38.795 fused_ordering(625) 00:15:38.795 fused_ordering(626) 00:15:38.795 fused_ordering(627) 00:15:38.795 fused_ordering(628) 00:15:38.795 fused_ordering(629) 00:15:38.795 fused_ordering(630) 00:15:38.795 fused_ordering(631) 00:15:38.795 fused_ordering(632) 00:15:38.795 fused_ordering(633) 00:15:38.795 fused_ordering(634) 00:15:38.795 fused_ordering(635) 00:15:38.795 
fused_ordering(636) 00:15:38.795 fused_ordering(637) 00:15:38.795 fused_ordering(638) 00:15:38.795 fused_ordering(639) 00:15:38.795 fused_ordering(640) 00:15:38.795 fused_ordering(641) 00:15:38.795 fused_ordering(642) 00:15:38.795 fused_ordering(643) 00:15:38.795 fused_ordering(644) 00:15:38.795 fused_ordering(645) 00:15:38.795 fused_ordering(646) 00:15:38.795 fused_ordering(647) 00:15:38.795 fused_ordering(648) 00:15:38.795 fused_ordering(649) 00:15:38.795 fused_ordering(650) 00:15:38.795 fused_ordering(651) 00:15:38.795 fused_ordering(652) 00:15:38.795 fused_ordering(653) 00:15:38.795 fused_ordering(654) 00:15:38.795 fused_ordering(655) 00:15:38.795 fused_ordering(656) 00:15:38.795 fused_ordering(657) 00:15:38.795 fused_ordering(658) 00:15:38.795 fused_ordering(659) 00:15:38.795 fused_ordering(660) 00:15:38.795 fused_ordering(661) 00:15:38.795 fused_ordering(662) 00:15:38.795 fused_ordering(663) 00:15:38.795 fused_ordering(664) 00:15:38.795 fused_ordering(665) 00:15:38.795 fused_ordering(666) 00:15:38.795 fused_ordering(667) 00:15:38.795 fused_ordering(668) 00:15:38.795 fused_ordering(669) 00:15:38.795 fused_ordering(670) 00:15:38.795 fused_ordering(671) 00:15:38.795 fused_ordering(672) 00:15:38.795 fused_ordering(673) 00:15:38.795 fused_ordering(674) 00:15:38.795 fused_ordering(675) 00:15:38.795 fused_ordering(676) 00:15:38.795 fused_ordering(677) 00:15:38.795 fused_ordering(678) 00:15:38.795 fused_ordering(679) 00:15:38.795 fused_ordering(680) 00:15:38.795 fused_ordering(681) 00:15:38.795 fused_ordering(682) 00:15:38.795 fused_ordering(683) 00:15:38.795 fused_ordering(684) 00:15:38.795 fused_ordering(685) 00:15:38.795 fused_ordering(686) 00:15:38.795 fused_ordering(687) 00:15:38.795 fused_ordering(688) 00:15:38.795 fused_ordering(689) 00:15:38.795 fused_ordering(690) 00:15:38.795 fused_ordering(691) 00:15:38.795 fused_ordering(692) 00:15:38.795 fused_ordering(693) 00:15:38.795 fused_ordering(694) 00:15:38.795 fused_ordering(695) 00:15:38.795 fused_ordering(696) 00:15:38.795 fused_ordering(697) 00:15:38.795 fused_ordering(698) 00:15:38.795 fused_ordering(699) 00:15:38.795 fused_ordering(700) 00:15:38.795 fused_ordering(701) 00:15:38.795 fused_ordering(702) 00:15:38.795 fused_ordering(703) 00:15:38.795 fused_ordering(704) 00:15:38.795 fused_ordering(705) 00:15:38.795 fused_ordering(706) 00:15:38.795 fused_ordering(707) 00:15:38.795 fused_ordering(708) 00:15:38.795 fused_ordering(709) 00:15:38.795 fused_ordering(710) 00:15:38.795 fused_ordering(711) 00:15:38.795 fused_ordering(712) 00:15:38.795 fused_ordering(713) 00:15:38.795 fused_ordering(714) 00:15:38.795 fused_ordering(715) 00:15:38.795 fused_ordering(716) 00:15:38.795 fused_ordering(717) 00:15:38.795 fused_ordering(718) 00:15:38.795 fused_ordering(719) 00:15:38.795 fused_ordering(720) 00:15:38.795 fused_ordering(721) 00:15:38.795 fused_ordering(722) 00:15:38.795 fused_ordering(723) 00:15:38.795 fused_ordering(724) 00:15:38.795 fused_ordering(725) 00:15:38.795 fused_ordering(726) 00:15:38.795 fused_ordering(727) 00:15:38.795 fused_ordering(728) 00:15:38.795 fused_ordering(729) 00:15:38.795 fused_ordering(730) 00:15:38.795 fused_ordering(731) 00:15:38.795 fused_ordering(732) 00:15:38.795 fused_ordering(733) 00:15:38.795 fused_ordering(734) 00:15:38.795 fused_ordering(735) 00:15:38.795 fused_ordering(736) 00:15:38.795 fused_ordering(737) 00:15:38.795 fused_ordering(738) 00:15:38.795 fused_ordering(739) 00:15:38.795 fused_ordering(740) 00:15:38.795 fused_ordering(741) 00:15:38.795 fused_ordering(742) 00:15:38.795 fused_ordering(743) 
00:15:38.795 fused_ordering(744) 00:15:38.795 fused_ordering(745) 00:15:38.795 fused_ordering(746) 00:15:38.795 fused_ordering(747) 00:15:38.795 fused_ordering(748) 00:15:38.795 fused_ordering(749) 00:15:38.795 fused_ordering(750) 00:15:38.795 fused_ordering(751) 00:15:38.795 fused_ordering(752) 00:15:38.795 fused_ordering(753) 00:15:38.795 fused_ordering(754) 00:15:38.795 fused_ordering(755) 00:15:38.795 fused_ordering(756) 00:15:38.795 fused_ordering(757) 00:15:38.795 fused_ordering(758) 00:15:38.795 fused_ordering(759) 00:15:38.795 fused_ordering(760) 00:15:38.795 fused_ordering(761) 00:15:38.795 fused_ordering(762) 00:15:38.795 fused_ordering(763) 00:15:38.795 fused_ordering(764) 00:15:38.795 fused_ordering(765) 00:15:38.795 fused_ordering(766) 00:15:38.795 fused_ordering(767) 00:15:38.795 fused_ordering(768) 00:15:38.795 fused_ordering(769) 00:15:38.795 fused_ordering(770) 00:15:38.795 fused_ordering(771) 00:15:38.795 fused_ordering(772) 00:15:38.795 fused_ordering(773) 00:15:38.795 fused_ordering(774) 00:15:38.795 fused_ordering(775) 00:15:38.795 fused_ordering(776) 00:15:38.795 fused_ordering(777) 00:15:38.795 fused_ordering(778) 00:15:38.795 fused_ordering(779) 00:15:38.795 fused_ordering(780) 00:15:38.795 fused_ordering(781) 00:15:38.795 fused_ordering(782) 00:15:38.795 fused_ordering(783) 00:15:38.795 fused_ordering(784) 00:15:38.795 fused_ordering(785) 00:15:38.795 fused_ordering(786) 00:15:38.795 fused_ordering(787) 00:15:38.795 fused_ordering(788) 00:15:38.795 fused_ordering(789) 00:15:38.795 fused_ordering(790) 00:15:38.795 fused_ordering(791) 00:15:38.795 fused_ordering(792) 00:15:38.795 fused_ordering(793) 00:15:38.795 fused_ordering(794) 00:15:38.795 fused_ordering(795) 00:15:38.795 fused_ordering(796) 00:15:38.795 fused_ordering(797) 00:15:38.795 fused_ordering(798) 00:15:38.795 fused_ordering(799) 00:15:38.795 fused_ordering(800) 00:15:38.795 fused_ordering(801) 00:15:38.795 fused_ordering(802) 00:15:38.795 fused_ordering(803) 00:15:38.795 fused_ordering(804) 00:15:38.795 fused_ordering(805) 00:15:38.795 fused_ordering(806) 00:15:38.795 fused_ordering(807) 00:15:38.795 fused_ordering(808) 00:15:38.795 fused_ordering(809) 00:15:38.795 fused_ordering(810) 00:15:38.795 fused_ordering(811) 00:15:38.795 fused_ordering(812) 00:15:38.796 fused_ordering(813) 00:15:38.796 fused_ordering(814) 00:15:38.796 fused_ordering(815) 00:15:38.796 fused_ordering(816) 00:15:38.796 fused_ordering(817) 00:15:38.796 fused_ordering(818) 00:15:38.796 fused_ordering(819) 00:15:38.796 fused_ordering(820) 00:15:39.361 fused_ordering(821) 00:15:39.361 fused_ordering(822) 00:15:39.361 fused_ordering(823) 00:15:39.361 fused_ordering(824) 00:15:39.361 fused_ordering(825) 00:15:39.361 fused_ordering(826) 00:15:39.361 fused_ordering(827) 00:15:39.361 fused_ordering(828) 00:15:39.361 fused_ordering(829) 00:15:39.361 fused_ordering(830) 00:15:39.361 fused_ordering(831) 00:15:39.361 fused_ordering(832) 00:15:39.361 fused_ordering(833) 00:15:39.361 fused_ordering(834) 00:15:39.361 fused_ordering(835) 00:15:39.361 fused_ordering(836) 00:15:39.361 fused_ordering(837) 00:15:39.361 fused_ordering(838) 00:15:39.361 fused_ordering(839) 00:15:39.361 fused_ordering(840) 00:15:39.361 fused_ordering(841) 00:15:39.361 fused_ordering(842) 00:15:39.361 fused_ordering(843) 00:15:39.361 fused_ordering(844) 00:15:39.361 fused_ordering(845) 00:15:39.361 fused_ordering(846) 00:15:39.361 fused_ordering(847) 00:15:39.361 fused_ordering(848) 00:15:39.361 fused_ordering(849) 00:15:39.361 fused_ordering(850) 00:15:39.361 
fused_ordering(851) 00:15:39.361 fused_ordering(852) 00:15:39.361 fused_ordering(853) 00:15:39.361 fused_ordering(854) 00:15:39.361 fused_ordering(855) 00:15:39.361 fused_ordering(856) 00:15:39.361 fused_ordering(857) 00:15:39.361 fused_ordering(858) 00:15:39.361 fused_ordering(859) 00:15:39.361 fused_ordering(860) 00:15:39.361 fused_ordering(861) 00:15:39.361 fused_ordering(862) 00:15:39.361 fused_ordering(863) 00:15:39.361 fused_ordering(864) 00:15:39.361 fused_ordering(865) 00:15:39.361 fused_ordering(866) 00:15:39.361 fused_ordering(867) 00:15:39.361 fused_ordering(868) 00:15:39.361 fused_ordering(869) 00:15:39.361 fused_ordering(870) 00:15:39.361 fused_ordering(871) 00:15:39.361 fused_ordering(872) 00:15:39.361 fused_ordering(873) 00:15:39.361 fused_ordering(874) 00:15:39.361 fused_ordering(875) 00:15:39.361 fused_ordering(876) 00:15:39.361 fused_ordering(877) 00:15:39.361 fused_ordering(878) 00:15:39.361 fused_ordering(879) 00:15:39.361 fused_ordering(880) 00:15:39.361 fused_ordering(881) 00:15:39.361 fused_ordering(882) 00:15:39.361 fused_ordering(883) 00:15:39.361 fused_ordering(884) 00:15:39.361 fused_ordering(885) 00:15:39.361 fused_ordering(886) 00:15:39.361 fused_ordering(887) 00:15:39.361 fused_ordering(888) 00:15:39.361 fused_ordering(889) 00:15:39.361 fused_ordering(890) 00:15:39.361 fused_ordering(891) 00:15:39.361 fused_ordering(892) 00:15:39.361 fused_ordering(893) 00:15:39.361 fused_ordering(894) 00:15:39.361 fused_ordering(895) 00:15:39.361 fused_ordering(896) 00:15:39.361 fused_ordering(897) 00:15:39.361 fused_ordering(898) 00:15:39.361 fused_ordering(899) 00:15:39.361 fused_ordering(900) 00:15:39.361 fused_ordering(901) 00:15:39.361 fused_ordering(902) 00:15:39.361 fused_ordering(903) 00:15:39.361 fused_ordering(904) 00:15:39.361 fused_ordering(905) 00:15:39.361 fused_ordering(906) 00:15:39.361 fused_ordering(907) 00:15:39.361 fused_ordering(908) 00:15:39.361 fused_ordering(909) 00:15:39.361 fused_ordering(910) 00:15:39.361 fused_ordering(911) 00:15:39.361 fused_ordering(912) 00:15:39.361 fused_ordering(913) 00:15:39.361 fused_ordering(914) 00:15:39.362 fused_ordering(915) 00:15:39.362 fused_ordering(916) 00:15:39.362 fused_ordering(917) 00:15:39.362 fused_ordering(918) 00:15:39.362 fused_ordering(919) 00:15:39.362 fused_ordering(920) 00:15:39.362 fused_ordering(921) 00:15:39.362 fused_ordering(922) 00:15:39.362 fused_ordering(923) 00:15:39.362 fused_ordering(924) 00:15:39.362 fused_ordering(925) 00:15:39.362 fused_ordering(926) 00:15:39.362 fused_ordering(927) 00:15:39.362 fused_ordering(928) 00:15:39.362 fused_ordering(929) 00:15:39.362 fused_ordering(930) 00:15:39.362 fused_ordering(931) 00:15:39.362 fused_ordering(932) 00:15:39.362 fused_ordering(933) 00:15:39.362 fused_ordering(934) 00:15:39.362 fused_ordering(935) 00:15:39.362 fused_ordering(936) 00:15:39.362 fused_ordering(937) 00:15:39.362 fused_ordering(938) 00:15:39.362 fused_ordering(939) 00:15:39.362 fused_ordering(940) 00:15:39.362 fused_ordering(941) 00:15:39.362 fused_ordering(942) 00:15:39.362 fused_ordering(943) 00:15:39.362 fused_ordering(944) 00:15:39.362 fused_ordering(945) 00:15:39.362 fused_ordering(946) 00:15:39.362 fused_ordering(947) 00:15:39.362 fused_ordering(948) 00:15:39.362 fused_ordering(949) 00:15:39.362 fused_ordering(950) 00:15:39.362 fused_ordering(951) 00:15:39.362 fused_ordering(952) 00:15:39.362 fused_ordering(953) 00:15:39.362 fused_ordering(954) 00:15:39.362 fused_ordering(955) 00:15:39.362 fused_ordering(956) 00:15:39.362 fused_ordering(957) 00:15:39.362 fused_ordering(958) 
00:15:39.362 fused_ordering(959) 00:15:39.362 fused_ordering(960) 00:15:39.362 fused_ordering(961) 00:15:39.362 fused_ordering(962) 00:15:39.362 fused_ordering(963) 00:15:39.362 fused_ordering(964) 00:15:39.362 fused_ordering(965) 00:15:39.362 fused_ordering(966) 00:15:39.362 fused_ordering(967) 00:15:39.362 fused_ordering(968) 00:15:39.362 fused_ordering(969) 00:15:39.362 fused_ordering(970) 00:15:39.362 fused_ordering(971) 00:15:39.362 fused_ordering(972) 00:15:39.362 fused_ordering(973) 00:15:39.362 fused_ordering(974) 00:15:39.362 fused_ordering(975) 00:15:39.362 fused_ordering(976) 00:15:39.362 fused_ordering(977) 00:15:39.362 fused_ordering(978) 00:15:39.362 fused_ordering(979) 00:15:39.362 fused_ordering(980) 00:15:39.362 fused_ordering(981) 00:15:39.362 fused_ordering(982) 00:15:39.362 fused_ordering(983) 00:15:39.362 fused_ordering(984) 00:15:39.362 fused_ordering(985) 00:15:39.362 fused_ordering(986) 00:15:39.362 fused_ordering(987) 00:15:39.362 fused_ordering(988) 00:15:39.362 fused_ordering(989) 00:15:39.362 fused_ordering(990) 00:15:39.362 fused_ordering(991) 00:15:39.362 fused_ordering(992) 00:15:39.362 fused_ordering(993) 00:15:39.362 fused_ordering(994) 00:15:39.362 fused_ordering(995) 00:15:39.362 fused_ordering(996) 00:15:39.362 fused_ordering(997) 00:15:39.362 fused_ordering(998) 00:15:39.362 fused_ordering(999) 00:15:39.362 fused_ordering(1000) 00:15:39.362 fused_ordering(1001) 00:15:39.362 fused_ordering(1002) 00:15:39.362 fused_ordering(1003) 00:15:39.362 fused_ordering(1004) 00:15:39.362 fused_ordering(1005) 00:15:39.362 fused_ordering(1006) 00:15:39.362 fused_ordering(1007) 00:15:39.362 fused_ordering(1008) 00:15:39.362 fused_ordering(1009) 00:15:39.362 fused_ordering(1010) 00:15:39.362 fused_ordering(1011) 00:15:39.362 fused_ordering(1012) 00:15:39.362 fused_ordering(1013) 00:15:39.362 fused_ordering(1014) 00:15:39.362 fused_ordering(1015) 00:15:39.362 fused_ordering(1016) 00:15:39.362 fused_ordering(1017) 00:15:39.362 fused_ordering(1018) 00:15:39.362 fused_ordering(1019) 00:15:39.362 fused_ordering(1020) 00:15:39.362 fused_ordering(1021) 00:15:39.362 fused_ordering(1022) 00:15:39.362 fused_ordering(1023) 00:15:39.362 09:56:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:39.362 09:56:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:39.362 09:56:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:39.362 09:56:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:39.621 rmmod nvme_tcp 00:15:39.621 rmmod nvme_fabrics 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71135 ']' 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71135 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # 
'[' -z 71135 ']' 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # kill -0 71135 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # uname 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 71135 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # echo 'killing process with pid 71135' 00:15:39.621 killing process with pid 71135 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # kill 71135 00:15:39.621 [2024-05-15 09:56:16.845234] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:39.621 09:56:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # wait 71135 00:15:39.879 09:56:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:39.879 09:56:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:39.879 09:56:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:39.879 09:56:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.879 09:56:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.879 09:56:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.879 09:56:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.879 09:56:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.138 09:56:17 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:40.138 00:15:40.138 real 0m4.629s 00:15:40.138 user 0m5.408s 00:15:40.138 sys 0m1.687s 00:15:40.138 09:56:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:40.138 ************************************ 00:15:40.138 END TEST nvmf_fused_ordering 00:15:40.138 ************************************ 00:15:40.138 09:56:17 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.138 09:56:17 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:40.138 09:56:17 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:40.138 09:56:17 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:40.138 09:56:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:40.138 ************************************ 00:15:40.138 START TEST nvmf_delete_subsystem 00:15:40.138 ************************************ 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:40.138 * Looking for test storage... 
00:15:40.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.138 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:40.139 Cannot find device "nvmf_tgt_br" 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:40.139 Cannot find device "nvmf_tgt_br2" 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:15:40.139 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:40.397 Cannot find device "nvmf_tgt_br" 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:40.397 Cannot find device "nvmf_tgt_br2" 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:40.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:40.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.397 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.655 09:56:17 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:40.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:15:40.655 00:15:40.655 --- 10.0.0.2 ping statistics --- 00:15:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.655 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:40.655 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:40.655 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:40.655 00:15:40.655 --- 10.0.0.3 ping statistics --- 00:15:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.655 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:40.655 00:15:40.655 --- 10.0.0.1 ping statistics --- 00:15:40.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.655 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71401 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71401 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # '[' -z 71401 ']' 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
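The nvmf_veth_init sequence traced above is how these NET_TYPE=virt runs get a real TCP path without physical NICs: the target lives in its own network namespace, veth pairs connect it to the root namespace through a bridge, and static 10.0.0.x/24 addresses plus an iptables rule for port 4420 make the listener reachable. A condensed sketch of that topology, using the same interface and namespace names as the log (a simplification of the common.sh helper, not a drop-in replacement):

# Namespace for the target; the initiator stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk

# Veth pairs: the *_if ends carry traffic, the *_br ends are bridged in the root namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: the initiator is 10.0.0.1, the target answers on 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and tie the root-namespace ends together with a bridge.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Accept NVMe/TCP traffic and bridge forwarding, then verify with the pings shown above.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

# With the topology in place, nvmfappstart launches the target inside the namespace:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &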
00:15:40.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:40.655 09:56:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:40.655 [2024-05-15 09:56:17.895703] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:15:40.655 [2024-05-15 09:56:17.896085] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.914 [2024-05-15 09:56:18.041292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:40.914 [2024-05-15 09:56:18.208434] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.914 [2024-05-15 09:56:18.208724] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.914 [2024-05-15 09:56:18.208888] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.914 [2024-05-15 09:56:18.209038] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.914 [2024-05-15 09:56:18.209126] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.914 [2024-05-15 09:56:18.210059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.914 [2024-05-15 09:56:18.210060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # return 0 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:41.848 [2024-05-15 09:56:18.958043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.2 -s 4420 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:41.848 [2024-05-15 09:56:18.984007] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:41.848 [2024-05-15 09:56:18.984666] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.848 09:56:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:41.848 NULL1 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:41.848 Delay0 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71456 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:41.848 09:56:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:41.848 [2024-05-15 09:56:19.190893] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
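For orientation: the xtrace above is the target-side setup that delete_subsystem.sh performs before launching spdk_nvme_perf. A condensed sketch of the same sequence, issued directly against the default /var/tmp/spdk.sock RPC socket instead of through the test's rpc_cmd wrapper; the NULL1/Delay0 names and the 10.0.0.2:4420 listener are simply the values echoed in this log, not requirements:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, same flags as echoed above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, up to 10 namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                              # 1000 MB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # wrap it with ~1 s of artificial latency per I/O
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0      # expose the delay bdev as namespace 1

spdk_nvme_perf then drives queued I/O against that listener while the script deletes the subsystem underneath it, which is what the flood of "completed with error" lines below reflects.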
00:15:43.748 09:56:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:43.748 09:56:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:43.748 09:56:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 starting I/O failed: -6 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 starting I/O failed: -6 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 starting I/O failed: -6 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 starting I/O failed: -6 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 starting I/O failed: -6 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 starting I/O failed: -6 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 starting I/O failed: -6 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 starting I/O failed: -6 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 starting I/O failed: -6 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 starting I/O failed: -6 00:15:44.006 Write completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 starting I/O failed: -6 00:15:44.006 Read completed with error (sct=0, sc=8) 00:15:44.006 [2024-05-15 09:56:21.228233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaefce0 is same w[2024-05-15 09:56:21.228337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228403] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228512] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 [2024-05-15 09:56:21.228521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f874f0 is same with the state(5) to be set 00:15:44.006 ith the state(5) to be set 00:15:44.007 [2024-05-15 09:56:21.228673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f87e70 is same with Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 
Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 the state(5) to be set 00:15:44.007 [2024-05-15 09:56:21.229056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f87e70 is same with the state(5) to be set 00:15:44.007 [2024-05-15 09:56:21.229142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f87e70 is same with the state(5) to be set 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 
00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 [2024-05-15 09:56:21.237971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f27e8000c00 is same with the state(5) to be set 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error 
(sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 starting I/O failed: -6 00:15:44.007 [2024-05-15 09:56:21.250414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f27e800c470 is same with the state(5) to be set 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Write completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.007 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 
00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Read completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 Write completed with error (sct=0, sc=8) 00:15:44.008 [2024-05-15 09:56:21.257706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f27e800c780 is same with the state(5) to be set 00:15:44.941 [2024-05-15 09:56:22.212608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaef100 is same with the state(5) to be set 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 
00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 [2024-05-15 09:56:22.224887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf1220 is same with the state(5) to be set 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Write completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 Read completed with error (sct=0, sc=8) 00:15:44.941 [2024-05-15 09:56:22.225973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaefff0 is same with the state(5) to be set 00:15:44.941 Initializing NVMe Controllers 00:15:44.941 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:44.941 Controller IO queue size 128, less than required. 00:15:44.941 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:44.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:44.941 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:44.941 Initialization complete. Launching workers. 
00:15:44.941 ======================================================== 00:15:44.941 Latency(us) 00:15:44.941 Device Information : IOPS MiB/s Average min max 00:15:44.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.41 0.08 906346.79 899.73 1012062.35 00:15:44.941 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.37 0.08 751826.97 7015.44 1031902.36 00:15:44.941 ======================================================== 00:15:44.941 Total : 337.78 0.16 827496.23 899.73 1031902.36 00:15:44.941 00:15:44.941 [2024-05-15 09:56:22.226626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaef100 (9): Bad file descriptor 00:15:44.941 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:44.941 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:44.941 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:44.941 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71456 00:15:44.941 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:45.508 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:45.508 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71456 00:15:45.508 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71456) - No such process 00:15:45.508 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71456 00:15:45.508 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:15:45.508 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 71456 00:15:45.508 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:15:45.508 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:45.508 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:15:45.508 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:45.508 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 71456 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:45.509 [2024-05-15 09:56:22.750266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71502 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71502 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:45.509 09:56:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:45.768 [2024-05-15 09:56:22.944932] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
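The repeated "kill -0 71502" / "sleep 0.5" lines that follow are the script polling for the 3-second perf run (pid 71502) to exit. Roughly, as a sketch rather than the script verbatim:

  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
          -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do   # perf process still alive?
      (( delay++ > 20 )) && break              # give up after roughly 10 s of 0.5 s polls
      sleep 0.5
  done

Once kill -0 starts failing with "No such process", the script moves on; the same pattern is visible earlier in this log for pid 71456.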
00:15:46.026 09:56:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:46.026 09:56:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71502 00:15:46.026 09:56:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:46.592 09:56:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:46.592 09:56:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71502 00:15:46.592 09:56:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:47.158 09:56:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:47.158 09:56:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71502 00:15:47.158 09:56:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:47.415 09:56:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:47.415 09:56:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71502 00:15:47.415 09:56:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:47.981 09:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:47.981 09:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71502 00:15:47.981 09:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:48.547 09:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:48.547 09:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71502 00:15:48.547 09:56:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:48.805 Initializing NVMe Controllers 00:15:48.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:48.806 Controller IO queue size 128, less than required. 00:15:48.806 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:48.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:48.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:48.806 Initialization complete. Launching workers. 
00:15:48.806 ======================================================== 00:15:48.806 Latency(us) 00:15:48.806 Device Information : IOPS MiB/s Average min max 00:15:48.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005049.66 1000209.18 1042278.51 00:15:48.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1007607.61 1000364.12 1042562.29 00:15:48.806 ======================================================== 00:15:48.806 Total : 256.00 0.12 1006328.64 1000209.18 1042562.29 00:15:48.806 00:15:49.064 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:49.064 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71502 00:15:49.064 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71502) - No such process 00:15:49.064 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71502 00:15:49.064 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:49.064 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:49.064 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:49.064 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:49.064 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:49.064 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:49.065 rmmod nvme_tcp 00:15:49.065 rmmod nvme_fabrics 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71401 ']' 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71401 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' -z 71401 ']' 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # kill -0 71401 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # uname 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 71401 00:15:49.065 killing process with pid 71401 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # echo 'killing process with pid 71401' 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # kill 71401 00:15:49.065 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # wait 71401 00:15:49.065 [2024-05-15 09:56:26.419791] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:49.677 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:49.677 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:49.677 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:49.677 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.677 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:49.677 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.677 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.677 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.677 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:49.677 ************************************ 00:15:49.677 END TEST nvmf_delete_subsystem 00:15:49.677 ************************************ 00:15:49.677 00:15:49.677 real 0m9.529s 00:15:49.677 user 0m27.521s 00:15:49.677 sys 0m2.011s 00:15:49.677 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:49.677 09:56:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:49.677 09:56:26 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:49.677 09:56:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:49.677 09:56:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:49.677 09:56:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:49.677 ************************************ 00:15:49.677 START TEST nvmf_ns_masking 00:15:49.677 ************************************ 00:15:49.677 09:56:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:49.677 * Looking for test storage... 
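run_test has just launched the next suite, ns_masking.sh, with the same tcp transport. As a loose sketch (assuming the same repo checkout, root privileges and the virtual veth/netns topology this job uses; exact environment knobs may differ), the suite can also be started by hand:

  cd /home/vagrant/spdk_repo/spdk
  NET_TYPE=virt test/nvmf/target/ns_masking.sh --transport=tcp   # what run_test nvmf_ns_masking wraps here

The remainder of this section is that script sourcing nvmf/common.sh and rebuilding the veth/namespace test network before starting its own nvmf_tgt instance.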
00:15:49.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:15:49.677 09:56:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=3b60ebaa-5a02-4cad-b5f1-9a44273b0f28 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:49.936 09:56:27 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:49.936 Cannot find device "nvmf_tgt_br" 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.936 Cannot find device "nvmf_tgt_br2" 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:49.936 Cannot find device "nvmf_tgt_br" 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:15:49.936 Cannot find device "nvmf_tgt_br2" 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.936 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:50.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:15:50.196 00:15:50.196 --- 10.0.0.2 ping statistics --- 00:15:50.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.196 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:50.196 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:50.196 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:15:50.196 00:15:50.196 --- 10.0.0.3 ping statistics --- 00:15:50.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.196 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:50.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:50.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:15:50.196 00:15:50.196 --- 10.0.0.1 ping statistics --- 00:15:50.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.196 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=71733 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 71733 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # '[' -z 71733 ']' 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:15:50.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:50.196 09:56:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:50.196 [2024-05-15 09:56:27.562037] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:15:50.196 [2024-05-15 09:56:27.562245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.454 [2024-05-15 09:56:27.709966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:50.713 [2024-05-15 09:56:27.876224] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.713 [2024-05-15 09:56:27.876279] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.713 [2024-05-15 09:56:27.876292] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.713 [2024-05-15 09:56:27.876303] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.713 [2024-05-15 09:56:27.876311] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.713 [2024-05-15 09:56:27.876446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.713 [2024-05-15 09:56:27.877382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.713 [2024-05-15 09:56:27.878077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:50.713 [2024-05-15 09:56:27.878085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.278 09:56:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:51.278 09:56:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@861 -- # return 0 00:15:51.278 09:56:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.278 09:56:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:51.278 09:56:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:51.278 09:56:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.278 09:56:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:51.536 [2024-05-15 09:56:28.862444] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.794 09:56:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:15:51.794 09:56:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:15:51.794 09:56:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:52.052 Malloc1 00:15:52.052 09:56:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:52.308 Malloc2 00:15:52.565 09:56:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a 
-s SPDKISFASTANDAWESOME 00:15:52.821 09:56:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:53.077 09:56:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.334 [2024-05-15 09:56:30.493494] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:53.334 [2024-05-15 09:56:30.494736] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.334 09:56:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:15:53.334 09:56:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3b60ebaa-5a02-4cad-b5f1-9a44273b0f28 -a 10.0.0.2 -s 4420 -i 4 00:15:53.334 09:56:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:15:53.334 09:56:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:15:53.334 09:56:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:15:53.334 09:56:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:15:53.334 09:56:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:15:55.857 09:56:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:15:55.857 09:56:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:15:55.857 09:56:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:15:55.857 09:56:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:15:55.857 09:56:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:15:55.857 09:56:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:15:55.857 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:55.857 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:55.857 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:55.857 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:55.858 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:15:55.858 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:55.858 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:55.858 [ 0]:0x1 00:15:55.858 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:55.858 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:55.858 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=93a5c897e9dc4269aaac0f429dd78850 00:15:55.858 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 93a5c897e9dc4269aaac0f429dd78850 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 
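The ns_is_visible helper exercised above reduces to two nvme-cli calls: confirm the NSID shows up in the controller's active namespace list, then check that the NGUID reported for it is not the all-zero placeholder a masked namespace returns. A minimal standalone sketch reconstructed from the trace (the actual helper in ns_masking.sh may differ in detail):

    ns_is_visible() {
        local ctrl=$1 nsid=$2                       # e.g. /dev/nvme0 0x1
        # the NSID must be listed on the controller...
        nvme list-ns "$ctrl" | grep -q "$nsid" || return 1
        # ...and its NGUID must not be all zeros
        local nguid
        nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
        [[ "$nguid" != "00000000000000000000000000000000" ]]
    }

    ns_is_visible /dev/nvme0 0x1 && echo "nsid 1 visible to this host"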
00:15:55.858 09:56:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:55.858 [ 0]:0x1 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=93a5c897e9dc4269aaac0f429dd78850 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 93a5c897e9dc4269aaac0f429dd78850 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:55.858 [ 1]:0x2 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=61f69806e98b4f47b1259f521ba49255 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 61f69806e98b4f47b1259f521ba49255 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:55.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.858 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.115 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:56.390 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:15:56.390 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3b60ebaa-5a02-4cad-b5f1-9a44273b0f28 -a 10.0.0.2 -s 4420 -i 4 00:15:56.654 09:56:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:56.654 09:56:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:15:56.654 09:56:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.654 09:56:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 1 ]] 00:15:56.654 09:56:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=1 00:15:56.654 09:56:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 
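From here the test switches to per-host masking: the namespace is re-attached with --no-auto-visible so that no initiator sees it until it is explicitly exposed with nvmf_ns_add_host, and hidden again with nvmf_ns_remove_host. Condensed from the rpc.py calls in this trace (full script path shortened into a variable):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    host=nqn.2016-06.io.spdk:host1

    # attach Malloc1 as NSID 1 but keep it hidden from every host
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 1 --no-auto-visible
    # expose NSID 1 to one specific host NQN
    $rpc nvmf_ns_add_host "$nqn" 1 "$host"
    # hide it from that host again
    $rpc nvmf_ns_remove_host "$nqn" 1 "$host"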
00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:58.611 [ 0]:0x2 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # 
nguid=61f69806e98b4f47b1259f521ba49255 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 61f69806e98b4f47b1259f521ba49255 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:58.611 09:56:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:59.176 [ 0]:0x1 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=93a5c897e9dc4269aaac0f429dd78850 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 93a5c897e9dc4269aaac0f429dd78850 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:59.176 [ 1]:0x2 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=61f69806e98b4f47b1259f521ba49255 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 61f69806e98b4f47b1259f521ba49255 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.176 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:59.434 09:56:36 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:15:59.434 [ 0]:0x2 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=61f69806e98b4f47b1259f521ba49255 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 61f69806e98b4f47b1259f521ba49255 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:15:59.434 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:59.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.693 09:56:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:59.950 09:56:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:15:59.950 09:56:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3b60ebaa-5a02-4cad-b5f1-9a44273b0f28 -a 10.0.0.2 -s 4420 -i 4 00:15:59.950 09:56:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:59.950 09:56:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:15:59.950 09:56:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.950 09:56:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:15:59.950 09:56:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:15:59.950 09:56:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:16:01.861 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:01.861 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:16:01.861 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:01.861 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:16:01.861 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.861 09:56:39 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # return 0 00:16:01.861 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:01.861 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:02.119 [ 0]:0x1 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=93a5c897e9dc4269aaac0f429dd78850 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 93a5c897e9dc4269aaac0f429dd78850 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:02.119 [ 1]:0x2 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=61f69806e98b4f47b1259f521ba49255 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 61f69806e98b4f47b1259f521ba49255 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.119 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:02.377 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:16:02.377 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:02.377 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:16:02.377 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:16:02.377 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:02.377 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:16:02.377 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:02.377 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:16:02.377 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.377 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:02.377 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:02.377 09:56:39 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.636 [ 0]:0x2 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=61f69806e98b4f47b1259f521ba49255 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 61f69806e98b4f47b1259f521ba49255 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:02.636 09:56:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:02.894 [2024-05-15 09:56:40.115748] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:02.894 2024/05/15 09:56:40 error on JSON-RPC call, method: 
nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:16:02.894 request: 00:16:02.894 { 00:16:02.894 "method": "nvmf_ns_remove_host", 00:16:02.894 "params": { 00:16:02.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.894 "nsid": 2, 00:16:02.894 "host": "nqn.2016-06.io.spdk:host1" 00:16:02.894 } 00:16:02.894 } 00:16:02.894 Got JSON-RPC error response 00:16:02.894 GoRPCClient: error on JSON-RPC call 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:02.894 [ 0]:0x2 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=61f69806e98b4f47b1259f521ba49255 00:16:02.894 09:56:40 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 61f69806e98b4f47b1259f521ba49255 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:16:02.894 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:03.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.153 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:03.411 rmmod nvme_tcp 00:16:03.411 rmmod nvme_fabrics 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 71733 ']' 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 71733 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' -z 71733 ']' 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # kill -0 71733 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # uname 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 71733 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # echo 'killing process with pid 71733' 00:16:03.411 killing process with pid 71733 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # kill 71733 00:16:03.411 [2024-05-15 09:56:40.740002] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:03.411 09:56:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@971 -- # wait 71733 00:16:03.979 09:56:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.979 09:56:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:03.979 09:56:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:03.979 09:56:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.979 09:56:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.979 09:56:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.979 09:56:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.979 09:56:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.979 09:56:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:03.979 00:16:03.979 real 0m14.329s 00:16:03.979 user 0m55.685s 00:16:03.979 sys 0m3.430s 00:16:03.979 09:56:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:03.979 09:56:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:03.979 ************************************ 00:16:03.979 END TEST nvmf_ns_masking 00:16:03.979 ************************************ 00:16:03.979 09:56:41 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:16:03.979 09:56:41 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:03.979 09:56:41 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:03.979 09:56:41 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:03.979 09:56:41 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:03.979 09:56:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:03.979 ************************************ 00:16:03.979 START TEST nvmf_host_management 00:16:03.979 ************************************ 00:16:03.979 09:56:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:04.238 * Looking for test storage... 
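Between the two tests, nvmftestfini tears the fixture down so the next test can rebuild it from scratch: the kernel initiator modules pulled in by nvme connect are unloaded, the nvmf_tgt process is killed, and the target network namespace is removed. A rough sketch of that sequence; _remove_spdk_ns itself is not expanded in the trace, so the netns deletion shown here is an assumption about what it does:

    # initiator side: unload the kernel NVMe/TCP modules
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # target side: stop nvmf_tgt (its pid was recorded as nvmfpid at startup)
    kill "$nvmfpid" && wait "$nvmfpid"
    # assumed body of _remove_spdk_ns: drop the target namespace and stale addresses
    ip netns delete nvmf_tgt_ns_spdk
    ip -4 addr flush nvmf_init_if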
00:16:04.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
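The host identity printed above is generated fresh for every run: nvme gen-hostnqn emits a UUID-based NQN and the HOSTID is just its trailing UUID. A small sketch of that relationship (the exact shell in nvmf/common.sh may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:8b97099d-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")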
00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:04.238 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:04.238 Cannot find device "nvmf_tgt_br" 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:04.239 Cannot find device "nvmf_tgt_br2" 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:04.239 Cannot find device "nvmf_tgt_br" 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:04.239 Cannot find device "nvmf_tgt_br2" 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:04.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:04.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:04.239 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:04.498 09:56:41 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:04.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:16:04.498 00:16:04.498 --- 10.0.0.2 ping statistics --- 00:16:04.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.498 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:04.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:04.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:16:04.498 00:16:04.498 --- 10.0.0.3 ping statistics --- 00:16:04.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.498 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:04.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:04.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:16:04.498 00:16:04.498 --- 10.0.0.1 ping statistics --- 00:16:04.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.498 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72314 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72314 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 72314 ']' 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:04.498 09:56:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:04.756 [2024-05-15 09:56:41.918897] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:16:04.756 [2024-05-15 09:56:41.919003] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.756 [2024-05-15 09:56:42.076327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.015 [2024-05-15 09:56:42.242138] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.015 [2024-05-15 09:56:42.242219] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.015 [2024-05-15 09:56:42.242231] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.015 [2024-05-15 09:56:42.242240] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.015 [2024-05-15 09:56:42.242248] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
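The veth topology rebuilt above is the standard one for NET_TYPE=virt TCP runs: nvmf_init_if (10.0.0.1) stays in the root namespace as the initiator, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into nvmf_tgt_ns_spdk for the target, and the three bridge-side peers are enslaved to nvmf_br, with TCP port 4420 opened in iptables. Condensed from the commands in the trace (link-up steps and the FORWARD rule omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for p in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$p" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT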
00:16:05.015 [2024-05-15 09:56:42.243050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.015 [2024-05-15 09:56:42.243149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.015 [2024-05-15 09:56:42.243310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:05.015 [2024-05-15 09:56:42.243312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.967 09:56:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:05.967 09:56:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:16:05.967 09:56:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:05.967 09:56:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:05.967 09:56:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:05.967 [2024-05-15 09:56:43.042561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:05.967 Malloc0 00:16:05.967 [2024-05-15 09:56:43.144248] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:05.967 [2024-05-15 09:56:43.144566] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:05.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
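On top of the transport created above, starttarget provisions the target through a batched rpcs.txt that the trace does not expand: a 64 MiB Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode0 with that bdev as a namespace, and a listener on 10.0.0.2:4420. An equivalent explicit sequence would look roughly like this; the host entry is inferred from the nvmf_subsystem_remove_host call later in the run, and the real batch file may differ:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420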
00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72397 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72397 /var/tmp/bdevperf.sock 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 72397 ']' 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:05.967 09:56:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:05.967 { 00:16:05.967 "params": { 00:16:05.967 "name": "Nvme$subsystem", 00:16:05.967 "trtype": "$TEST_TRANSPORT", 00:16:05.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:05.968 "adrfam": "ipv4", 00:16:05.968 "trsvcid": "$NVMF_PORT", 00:16:05.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:05.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:05.968 "hdgst": ${hdgst:-false}, 00:16:05.968 "ddgst": ${ddgst:-false} 00:16:05.968 }, 00:16:05.968 "method": "bdev_nvme_attach_controller" 00:16:05.968 } 00:16:05.968 EOF 00:16:05.968 )") 00:16:05.968 09:56:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:05.968 09:56:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:05.968 09:56:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:05.968 09:56:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:05.968 "params": { 00:16:05.968 "name": "Nvme0", 00:16:05.968 "trtype": "tcp", 00:16:05.968 "traddr": "10.0.0.2", 00:16:05.968 "adrfam": "ipv4", 00:16:05.968 "trsvcid": "4420", 00:16:05.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:05.968 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:05.968 "hdgst": false, 00:16:05.968 "ddgst": false 00:16:05.968 }, 00:16:05.968 "method": "bdev_nvme_attach_controller" 00:16:05.968 }' 00:16:05.968 [2024-05-15 09:56:43.260389] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
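gen_nvmf_target_json above prints the bdev_nvme_attach_controller config that follows it, and bdevperf receives it as /dev/fd/63, i.e. through bash process substitution. Schematically (a sketch of the invocation pattern, with the flags copied from the traced command line):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  # the <( ... ) file surfaces inside bdevperf as /dev/fd/63
  $bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!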
00:16:05.968 [2024-05-15 09:56:43.260494] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72397 ] 00:16:06.233 [2024-05-15 09:56:43.411203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.233 [2024-05-15 09:56:43.588229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.491 Running I/O for 10 seconds... 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:07.058 09:56:44 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:07.058 [2024-05-15 09:56:44.414495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.058 [2024-05-15 09:56:44.414716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.414904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.058 [2024-05-15 09:56:44.415081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.415272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.058 [2024-05-15 09:56:44.415418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.415524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.058 [2024-05-15 09:56:44.415615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.415702] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e0740 is same with the state(5) to be set 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:07.058 09:56:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:07.058 [2024-05-15 09:56:44.427187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e0740 (9): Bad file descriptor 00:16:07.058 [2024-05-15 09:56:44.427467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.427602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.427759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.427927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.428067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.428227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 
[2024-05-15 09:56:44.428373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.428512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.428659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.428793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.428906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.428989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.429112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.429245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.429354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.429460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.429561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.429650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.429796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.429921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.430020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.430168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.430316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.430462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.430568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.430650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 
09:56:44.430742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.430826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.430890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.430970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.431064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.431128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.058 [2024-05-15 09:56:44.431235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.058 [2024-05-15 09:56:44.431315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.431419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.431536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.431627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.431688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.431744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.431845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.431902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.432005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.432083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.432172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.432257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.432320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 
09:56:44.432376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.432466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.432519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.432614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.432668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.432765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.432853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.432924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.433004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.433075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.433177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.433233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.433287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.433372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.433426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.433518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.433573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.433636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.433690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.433751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 
09:56:44.433929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.434004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.434061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.434129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.434216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.434276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.434367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.434421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.434518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.434612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.434673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.434791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.434888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.435016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.435132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.435224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.435335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.435457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.435565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.435655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 
09:56:44.435756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.435894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.436037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.436137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.436202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.436263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.436342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.436399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.436508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.436588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.436683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.436757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.436826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.436879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.436975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.437027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.437139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.437246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.437343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.437476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 
09:56:44.437640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.437720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.437803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.437857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.059 [2024-05-15 09:56:44.437940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.059 [2024-05-15 09:56:44.437993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.319 [2024-05-15 09:56:44.438065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.319 [2024-05-15 09:56:44.438147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.319 [2024-05-15 09:56:44.438222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.319 [2024-05-15 09:56:44.438286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.319 [2024-05-15 09:56:44.438389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.319 [2024-05-15 09:56:44.438448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.319 [2024-05-15 09:56:44.438532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.319 [2024-05-15 09:56:44.438586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.319 [2024-05-15 09:56:44.438703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.319 [2024-05-15 09:56:44.438763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.319 [2024-05-15 09:56:44.438848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.319 [2024-05-15 09:56:44.438903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.319 [2024-05-15 09:56:44.439006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.319 [2024-05-15 09:56:44.439057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.319 [2024-05-15 
09:56:44.439218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:07.319 [2024-05-15 09:56:44.439325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.319 [2024-05-15 09:56:44.439595] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18e24f0 was disconnected and freed. reset controller. 00:16:07.319 [2024-05-15 09:56:44.439781] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:07.319 [2024-05-15 09:56:44.440960] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:07.319 task offset: 114688 on job bdev=Nvme0n1 fails 00:16:07.319 00:16:07.319 Latency(us) 00:16:07.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.319 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:07.319 Job: Nvme0n1 ended in about 0.61 seconds with error 00:16:07.319 Verification LBA range: start 0x0 length 0x400 00:16:07.319 Nvme0n1 : 0.61 1479.43 92.46 105.67 0.00 39413.86 9674.36 38947.11 00:16:07.319 =================================================================================================================== 00:16:07.319 Total : 1479.43 92.46 105.67 0.00 39413.86 9674.36 38947.11 00:16:07.319 [2024-05-15 09:56:44.444094] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:07.319 [2024-05-15 09:56:44.453054] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72397 00:16:08.303 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72397) - No such process 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:08.303 { 00:16:08.303 "params": { 00:16:08.303 "name": "Nvme$subsystem", 00:16:08.303 "trtype": "$TEST_TRANSPORT", 00:16:08.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:08.303 "adrfam": "ipv4", 00:16:08.303 "trsvcid": "$NVMF_PORT", 00:16:08.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:08.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:08.303 "hdgst": ${hdgst:-false}, 00:16:08.303 "ddgst": ${ddgst:-false} 00:16:08.303 }, 00:16:08.303 "method": "bdev_nvme_attach_controller" 00:16:08.303 } 00:16:08.303 EOF 00:16:08.303 )") 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@554 -- # cat 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:08.303 09:56:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:08.303 "params": { 00:16:08.303 "name": "Nvme0", 00:16:08.303 "trtype": "tcp", 00:16:08.303 "traddr": "10.0.0.2", 00:16:08.303 "adrfam": "ipv4", 00:16:08.303 "trsvcid": "4420", 00:16:08.303 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:08.303 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:08.303 "hdgst": false, 00:16:08.303 "ddgst": false 00:16:08.303 }, 00:16:08.303 "method": "bdev_nvme_attach_controller" 00:16:08.303 }' 00:16:08.303 [2024-05-15 09:56:45.490192] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:16:08.303 [2024-05-15 09:56:45.490618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72447 ] 00:16:08.303 [2024-05-15 09:56:45.635056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.560 [2024-05-15 09:56:45.798133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.818 Running I/O for 1 seconds... 00:16:09.753 00:16:09.753 Latency(us) 00:16:09.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.753 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:09.753 Verification LBA range: start 0x0 length 0x400 00:16:09.753 Nvme0n1 : 1.02 1512.88 94.56 0.00 0.00 41536.97 7021.71 36700.16 00:16:09.753 =================================================================================================================== 00:16:09.753 Total : 1512.88 94.56 0.00 0.00 41536.97 7021.71 36700.16 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:10.319 rmmod nvme_tcp 00:16:10.319 rmmod nvme_fabrics 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72314 
']' 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72314 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 72314 ']' 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 72314 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # uname 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72314 00:16:10.319 killing process with pid 72314 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:16:10.319 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:16:10.320 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72314' 00:16:10.320 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 72314 00:16:10.320 [2024-05-15 09:56:47.538967] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:10.320 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 72314 00:16:10.578 [2024-05-15 09:56:47.916073] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:10.578 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:10.578 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:10.578 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:10.578 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:10.578 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:10.578 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.578 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.578 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.837 09:56:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:10.837 09:56:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:10.837 ************************************ 00:16:10.837 END TEST nvmf_host_management 00:16:10.837 ************************************ 00:16:10.837 00:16:10.837 real 0m6.698s 00:16:10.837 user 0m25.506s 00:16:10.837 sys 0m1.946s 00:16:10.837 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:10.837 09:56:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:10.837 09:56:48 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:10.837 09:56:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:10.837 09:56:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:10.837 09:56:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:10.837 
************************************ 00:16:10.837 START TEST nvmf_lvol 00:16:10.837 ************************************ 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:10.837 * Looking for test storage... 00:16:10.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.837 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:10.838 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:11.095 Cannot find device "nvmf_tgt_br" 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:11.095 Cannot find device "nvmf_tgt_br2" 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:11.095 Cannot find device "nvmf_tgt_br" 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:11.095 Cannot find device "nvmf_tgt_br2" 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:16:11.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:11.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:11.095 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:11.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:11.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:16:11.353 00:16:11.353 --- 10.0.0.2 ping statistics --- 00:16:11.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.353 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:11.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:11.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:16:11.353 00:16:11.353 --- 10.0.0.3 ping statistics --- 00:16:11.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.353 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:11.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:11.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:16:11.353 00:16:11.353 --- 10.0.0.1 ping statistics --- 00:16:11.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.353 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=72658 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 72658 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 72658 ']' 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:11.353 09:56:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:11.353 [2024-05-15 09:56:48.644589] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
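The nvmf_veth_init trace above builds the 10.0.0.0/24 test topology from scratch; stripped of the error-tolerant teardown and the second target interface (10.0.0.3), it amounts to this sketch:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side

  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
  ping -c 1 10.0.0.2                                                      # reachability check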
00:16:11.353 [2024-05-15 09:56:48.644707] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.609 [2024-05-15 09:56:48.789293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:11.609 [2024-05-15 09:56:48.948398] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.609 [2024-05-15 09:56:48.948482] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.609 [2024-05-15 09:56:48.948493] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.609 [2024-05-15 09:56:48.948503] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.609 [2024-05-15 09:56:48.948511] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.609 [2024-05-15 09:56:48.948743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.609 [2024-05-15 09:56:48.948867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.609 [2024-05-15 09:56:48.948872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.540 09:56:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:12.540 09:56:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:16:12.540 09:56:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:12.540 09:56:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:12.540 09:56:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:12.540 09:56:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.541 09:56:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:12.797 [2024-05-15 09:56:49.998364] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.797 09:56:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.054 09:56:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:13.054 09:56:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:13.619 09:56:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:13.619 09:56:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:13.876 09:56:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:14.133 09:56:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=049bc8a9-4c30-42c1-b8f7-3cd8ae090c72 00:16:14.133 09:56:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 049bc8a9-4c30-42c1-b8f7-3cd8ae090c72 lvol 20 00:16:14.391 09:56:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=aa77f45a-9870-4970-8b1d-b7611adb0bea 00:16:14.391 09:56:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:14.648 09:56:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 aa77f45a-9870-4970-8b1d-b7611adb0bea 00:16:14.905 09:56:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:15.163 [2024-05-15 09:56:52.478252] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:15.163 [2024-05-15 09:56:52.478607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.163 09:56:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:15.421 09:56:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=72815 00:16:15.421 09:56:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:15.421 09:56:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:16.793 09:56:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot aa77f45a-9870-4970-8b1d-b7611adb0bea MY_SNAPSHOT 00:16:16.793 09:56:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0001171e-5e10-489a-827b-b3122f52d70e 00:16:16.793 09:56:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize aa77f45a-9870-4970-8b1d-b7611adb0bea 30 00:16:17.052 09:56:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 0001171e-5e10-489a-827b-b3122f52d70e MY_CLONE 00:16:17.618 09:56:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5ee640fe-4ff1-4df7-b388-0f10eab4b987 00:16:17.618 09:56:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 5ee640fe-4ff1-4df7-b388-0f10eab4b987 00:16:18.238 09:56:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 72815 00:16:26.348 Initializing NVMe Controllers 00:16:26.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:26.348 Controller IO queue size 128, less than required. 00:16:26.348 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:26.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:26.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:26.348 Initialization complete. Launching workers. 
00:16:26.348 ======================================================== 00:16:26.348 Latency(us) 00:16:26.348 Device Information : IOPS MiB/s Average min max 00:16:26.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9392.80 36.69 13639.46 2500.26 98074.23 00:16:26.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9723.50 37.98 13171.98 249.38 78669.94 00:16:26.348 ======================================================== 00:16:26.348 Total : 19116.30 74.67 13401.68 249.38 98074.23 00:16:26.348 00:16:26.348 09:57:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:26.348 09:57:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete aa77f45a-9870-4970-8b1d-b7611adb0bea 00:16:26.606 09:57:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 049bc8a9-4c30-42c1-b8f7-3cd8ae090c72 00:16:26.864 09:57:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:26.864 09:57:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:26.864 09:57:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:26.864 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:26.864 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:26.864 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:26.864 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:26.864 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:26.864 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:26.864 rmmod nvme_tcp 00:16:26.864 rmmod nvme_fabrics 00:16:27.122 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 72658 ']' 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 72658 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 72658 ']' 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 72658 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # uname 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 72658 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 72658' 00:16:27.123 killing process with pid 72658 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 72658 00:16:27.123 [2024-05-15 09:57:04.288918] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:27.123 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@971 -- # 
wait 72658 00:16:27.381 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:27.381 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:27.381 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:27.381 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:27.381 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:27.381 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.381 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.381 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.639 09:57:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:27.639 00:16:27.639 real 0m16.719s 00:16:27.639 user 1m6.892s 00:16:27.639 sys 0m5.872s 00:16:27.639 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:27.639 09:57:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:27.639 ************************************ 00:16:27.639 END TEST nvmf_lvol 00:16:27.639 ************************************ 00:16:27.639 09:57:04 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:27.639 09:57:04 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:27.639 09:57:04 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:27.639 09:57:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:27.639 ************************************ 00:16:27.639 START TEST nvmf_lvs_grow 00:16:27.639 ************************************ 00:16:27.639 09:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:27.640 * Looking for test storage... 
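For reference, the RPC sequence the nvmf_lvol test above drove against that target can be replayed by hand. A condensed sketch, with the rpc.py path, sizes, and the 10.0.0.2:4420 listener taken from the trace (the UUID variables are captured from the create calls rather than hard-coded, and in the real test spdk_nvme_perf writes to the namespace concurrently with the snapshot and clone steps):

#!/usr/bin/env bash
# Condensed sketch of the nvmf_lvol flow traced above.
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192

# Two malloc bdevs striped into a RAID0, which backs the logical volume store.
$rpc bdev_malloc_create 64 512            # -> Malloc0
$rpc bdev_malloc_create 64 512            # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# Export the lvol over NVMe/TCP on the target-namespace address.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Snapshot the lvol, grow the origin, clone the snapshot, then inflate the clone.
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

# Teardown mirrors the end of the test.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"

The inflate step allocates the clone's clusters so MY_CLONE no longer depends on MY_SNAPSHOT, after which the subsystem, the origin lvol, and the lvstore can be removed as shown in the trace.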
00:16:27.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:27.640 Cannot find device "nvmf_tgt_br" 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:27.640 Cannot find device "nvmf_tgt_br2" 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:27.640 09:57:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:27.640 Cannot find device "nvmf_tgt_br" 00:16:27.640 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:16:27.640 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:27.898 Cannot find device "nvmf_tgt_br2" 00:16:27.898 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:16:27.898 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.899 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.899 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:28.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:28.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:16:28.157 00:16:28.157 --- 10.0.0.2 ping statistics --- 00:16:28.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.157 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:28.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:28.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:28.157 00:16:28.157 --- 10.0.0.3 ping statistics --- 00:16:28.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.157 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:28.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:16:28.157 00:16:28.157 --- 10.0.0.1 ping statistics --- 00:16:28.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.157 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73176 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73176 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 73176 ']' 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
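The lvs_grow_clean run that starts below boils down to one sequence: put a logical volume store on an AIO bdev, enlarge the backing file, rescan the AIO bdev, and then explicitly grow the lvstore so the new capacity shows up as data clusters. A minimal sketch of just that path, using the sizes, options, and jq checks from the trace that follows (the cluster counts 49 and 99 are the values the test asserts; this assumes an SPDK target is already running and reachable on the default RPC socket):

#!/usr/bin/env bash
# Minimal sketch of the lvstore-grow path exercised by lvs_grow_clean below.
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

# 200 MiB backing file -> AIO bdev -> lvstore with 4 MiB clusters (49 data clusters).
rm -f "$aio_file"
truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

# Carve a 150 MiB lvol, then double the backing file and rescan the AIO bdev.
$rpc bdev_lvol_create -u "$lvs" lvol 150
truncate -s 400M "$aio_file"
$rpc bdev_aio_rescan aio_bdev
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49

# Only an explicit grow makes the new space visible as clusters.
$rpc bdev_lvol_grow_lvstore -u "$lvs"
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99

The point the test makes is visible in the middle check: resizing the file and rescanning the AIO bdev alone leaves total_data_clusters at 49; only bdev_lvol_grow_lvstore extends the lvstore metadata over the new space, after which the count reaches 99.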
00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:28.157 09:57:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:28.157 [2024-05-15 09:57:05.408479] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:16:28.157 [2024-05-15 09:57:05.408569] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.416 [2024-05-15 09:57:05.544835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.416 [2024-05-15 09:57:05.706632] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.416 [2024-05-15 09:57:05.706699] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.416 [2024-05-15 09:57:05.706710] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.416 [2024-05-15 09:57:05.706721] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.416 [2024-05-15 09:57:05.706729] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.416 [2024-05-15 09:57:05.706769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:29.368 [2024-05-15 09:57:06.638169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:29.368 ************************************ 00:16:29.368 START TEST lvs_grow_clean 00:16:29.368 ************************************ 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:29.368 09:57:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:29.368 09:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:29.626 09:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:29.626 09:57:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:29.884 09:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:29.884 09:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:29.884 09:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:30.449 09:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:30.450 09:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:30.450 09:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 lvol 150 00:16:30.708 09:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0d02c0d1-2d5e-4f3b-89d8-a112aed798c2 00:16:30.708 09:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:30.708 09:57:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:30.966 [2024-05-15 09:57:08.191968] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:30.966 [2024-05-15 09:57:08.192056] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:30.966 true 00:16:30.966 09:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:30.966 09:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:31.225 09:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:31.225 09:57:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:31.485 09:57:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0d02c0d1-2d5e-4f3b-89d8-a112aed798c2 00:16:31.743 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:32.001 [2024-05-15 09:57:09.348346] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:32.001 [2024-05-15 09:57:09.348704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.258 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:32.516 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73349 00:16:32.516 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:32.516 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73349 /var/tmp/bdevperf.sock 00:16:32.516 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 73349 ']' 00:16:32.516 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.516 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:32.516 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:32.516 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:32.516 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:32.516 09:57:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:32.516 [2024-05-15 09:57:09.723211] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:16:32.516 [2024-05-15 09:57:09.723540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73349 ] 00:16:32.516 [2024-05-15 09:57:09.870476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.773 [2024-05-15 09:57:09.989179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.706 09:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:33.706 09:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:16:33.706 09:57:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:33.706 Nvme0n1 00:16:33.706 09:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:33.964 [ 00:16:33.964 { 00:16:33.964 "aliases": [ 00:16:33.964 "0d02c0d1-2d5e-4f3b-89d8-a112aed798c2" 00:16:33.964 ], 00:16:33.964 "assigned_rate_limits": { 00:16:33.964 "r_mbytes_per_sec": 0, 00:16:33.964 "rw_ios_per_sec": 0, 00:16:33.964 "rw_mbytes_per_sec": 0, 00:16:33.964 "w_mbytes_per_sec": 0 00:16:33.964 }, 00:16:33.964 "block_size": 4096, 00:16:33.964 "claimed": false, 00:16:33.964 "driver_specific": { 00:16:33.964 "mp_policy": "active_passive", 00:16:33.964 "nvme": [ 00:16:33.964 { 00:16:33.964 "ctrlr_data": { 00:16:33.964 "ana_reporting": false, 00:16:33.964 "cntlid": 1, 00:16:33.964 "firmware_revision": "24.05", 00:16:33.964 "model_number": "SPDK bdev Controller", 00:16:33.964 "multi_ctrlr": true, 00:16:33.964 "oacs": { 00:16:33.964 "firmware": 0, 00:16:33.964 "format": 0, 00:16:33.964 "ns_manage": 0, 00:16:33.964 "security": 0 00:16:33.964 }, 00:16:33.964 "serial_number": "SPDK0", 00:16:33.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:33.964 "vendor_id": "0x8086" 00:16:33.964 }, 00:16:33.964 "ns_data": { 00:16:33.964 "can_share": true, 00:16:33.964 "id": 1 00:16:33.964 }, 00:16:33.964 "trid": { 00:16:33.964 "adrfam": "IPv4", 00:16:33.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:33.964 "traddr": "10.0.0.2", 00:16:33.964 "trsvcid": "4420", 00:16:33.964 "trtype": "TCP" 00:16:33.964 }, 00:16:33.964 "vs": { 00:16:33.964 "nvme_version": "1.3" 00:16:33.964 } 00:16:33.964 } 00:16:33.964 ] 00:16:33.964 }, 00:16:33.964 "memory_domains": [ 00:16:33.964 { 00:16:33.964 "dma_device_id": "system", 00:16:33.964 "dma_device_type": 1 00:16:33.964 } 00:16:33.964 ], 00:16:33.964 "name": "Nvme0n1", 00:16:33.964 "num_blocks": 38912, 00:16:33.964 "product_name": "NVMe disk", 00:16:33.964 "supported_io_types": { 00:16:33.964 "abort": true, 00:16:33.964 "compare": true, 00:16:33.964 "compare_and_write": true, 00:16:33.964 "flush": true, 00:16:33.964 "nvme_admin": true, 00:16:33.964 "nvme_io": true, 00:16:33.964 "read": true, 00:16:33.964 "reset": true, 00:16:33.964 "unmap": true, 00:16:33.964 "write": true, 00:16:33.964 "write_zeroes": true 00:16:33.964 }, 00:16:33.964 "uuid": "0d02c0d1-2d5e-4f3b-89d8-a112aed798c2", 00:16:33.964 "zoned": false 00:16:33.964 } 00:16:33.964 ] 00:16:33.964 09:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73396 00:16:33.964 09:57:11 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:33.964 09:57:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:34.221 Running I/O for 10 seconds... 00:16:35.153 Latency(us) 00:16:35.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:35.153 Nvme0n1 : 1.00 11410.00 44.57 0.00 0.00 0.00 0.00 0.00 00:16:35.153 =================================================================================================================== 00:16:35.153 Total : 11410.00 44.57 0.00 0.00 0.00 0.00 0.00 00:16:35.153 00:16:36.086 09:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:36.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:36.086 Nvme0n1 : 2.00 11409.00 44.57 0.00 0.00 0.00 0.00 0.00 00:16:36.086 =================================================================================================================== 00:16:36.086 Total : 11409.00 44.57 0.00 0.00 0.00 0.00 0.00 00:16:36.086 00:16:36.343 true 00:16:36.343 09:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:36.343 09:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:36.601 09:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:36.601 09:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:36.601 09:57:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73396 00:16:37.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:37.194 Nvme0n1 : 3.00 11420.00 44.61 0.00 0.00 0.00 0.00 0.00 00:16:37.194 =================================================================================================================== 00:16:37.194 Total : 11420.00 44.61 0.00 0.00 0.00 0.00 0.00 00:16:37.194 00:16:38.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:38.143 Nvme0n1 : 4.00 11366.00 44.40 0.00 0.00 0.00 0.00 0.00 00:16:38.143 =================================================================================================================== 00:16:38.143 Total : 11366.00 44.40 0.00 0.00 0.00 0.00 0.00 00:16:38.143 00:16:39.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:39.076 Nvme0n1 : 5.00 11355.80 44.36 0.00 0.00 0.00 0.00 0.00 00:16:39.076 =================================================================================================================== 00:16:39.076 Total : 11355.80 44.36 0.00 0.00 0.00 0.00 0.00 00:16:39.076 00:16:40.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:40.021 Nvme0n1 : 6.00 11200.17 43.75 0.00 0.00 0.00 0.00 0.00 00:16:40.021 =================================================================================================================== 00:16:40.021 Total : 11200.17 43.75 0.00 0.00 0.00 0.00 0.00 00:16:40.021 00:16:41.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, 
depth: 128, IO size: 4096) 00:16:41.408 Nvme0n1 : 7.00 11369.43 44.41 0.00 0.00 0.00 0.00 0.00 00:16:41.408 =================================================================================================================== 00:16:41.408 Total : 11369.43 44.41 0.00 0.00 0.00 0.00 0.00 00:16:41.408 00:16:42.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.343 Nvme0n1 : 8.00 11308.50 44.17 0.00 0.00 0.00 0.00 0.00 00:16:42.343 =================================================================================================================== 00:16:42.343 Total : 11308.50 44.17 0.00 0.00 0.00 0.00 0.00 00:16:42.343 00:16:43.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.275 Nvme0n1 : 9.00 11347.67 44.33 0.00 0.00 0.00 0.00 0.00 00:16:43.275 =================================================================================================================== 00:16:43.275 Total : 11347.67 44.33 0.00 0.00 0.00 0.00 0.00 00:16:43.275 00:16:44.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.208 Nvme0n1 : 10.00 11406.10 44.56 0.00 0.00 0.00 0.00 0.00 00:16:44.208 =================================================================================================================== 00:16:44.208 Total : 11406.10 44.56 0.00 0.00 0.00 0.00 0.00 00:16:44.208 00:16:44.208 00:16:44.208 Latency(us) 00:16:44.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.208 Nvme0n1 : 10.00 11415.10 44.59 0.00 0.00 11207.75 3807.33 189742.32 00:16:44.208 =================================================================================================================== 00:16:44.208 Total : 11415.10 44.59 0.00 0.00 11207.75 3807.33 189742.32 00:16:44.208 0 00:16:44.208 09:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73349 00:16:44.208 09:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 73349 ']' 00:16:44.208 09:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 73349 00:16:44.208 09:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname 00:16:44.208 09:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:44.208 09:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73349 00:16:44.208 09:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:16:44.208 09:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:16:44.208 killing process with pid 73349 00:16:44.208 Received shutdown signal, test time was about 10.000000 seconds 00:16:44.208 00:16:44.208 Latency(us) 00:16:44.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:44.208 =================================================================================================================== 00:16:44.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:44.208 09:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73349' 00:16:44.208 09:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 73349 00:16:44.208 09:57:21 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 73349 00:16:44.476 09:57:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:44.734 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:44.991 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:44.991 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:45.248 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:45.248 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:45.248 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:45.506 [2024-05-15 09:57:22.825689] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:45.506 09:57:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:46.070 2024/05/15 09:57:23 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:16:46.070 request: 00:16:46.070 { 00:16:46.070 "method": "bdev_lvol_get_lvstores", 00:16:46.070 "params": { 
00:16:46.070 "uuid": "8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9" 00:16:46.070 } 00:16:46.070 } 00:16:46.070 Got JSON-RPC error response 00:16:46.070 GoRPCClient: error on JSON-RPC call 00:16:46.070 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:16:46.070 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:46.071 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:46.071 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:46.071 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:46.071 aio_bdev 00:16:46.071 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0d02c0d1-2d5e-4f3b-89d8-a112aed798c2 00:16:46.071 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=0d02c0d1-2d5e-4f3b-89d8-a112aed798c2 00:16:46.071 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:16:46.071 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:16:46.071 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:16:46.071 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:16:46.071 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:46.329 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0d02c0d1-2d5e-4f3b-89d8-a112aed798c2 -t 2000 00:16:46.589 [ 00:16:46.589 { 00:16:46.589 "aliases": [ 00:16:46.589 "lvs/lvol" 00:16:46.589 ], 00:16:46.589 "assigned_rate_limits": { 00:16:46.589 "r_mbytes_per_sec": 0, 00:16:46.589 "rw_ios_per_sec": 0, 00:16:46.589 "rw_mbytes_per_sec": 0, 00:16:46.589 "w_mbytes_per_sec": 0 00:16:46.589 }, 00:16:46.589 "block_size": 4096, 00:16:46.589 "claimed": false, 00:16:46.589 "driver_specific": { 00:16:46.589 "lvol": { 00:16:46.589 "base_bdev": "aio_bdev", 00:16:46.589 "clone": false, 00:16:46.589 "esnap_clone": false, 00:16:46.589 "lvol_store_uuid": "8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9", 00:16:46.589 "num_allocated_clusters": 38, 00:16:46.589 "snapshot": false, 00:16:46.589 "thin_provision": false 00:16:46.589 } 00:16:46.589 }, 00:16:46.589 "name": "0d02c0d1-2d5e-4f3b-89d8-a112aed798c2", 00:16:46.589 "num_blocks": 38912, 00:16:46.589 "product_name": "Logical Volume", 00:16:46.589 "supported_io_types": { 00:16:46.589 "abort": false, 00:16:46.589 "compare": false, 00:16:46.589 "compare_and_write": false, 00:16:46.589 "flush": false, 00:16:46.589 "nvme_admin": false, 00:16:46.589 "nvme_io": false, 00:16:46.589 "read": true, 00:16:46.589 "reset": true, 00:16:46.589 "unmap": true, 00:16:46.589 "write": true, 00:16:46.589 "write_zeroes": true 00:16:46.589 }, 00:16:46.589 "uuid": "0d02c0d1-2d5e-4f3b-89d8-a112aed798c2", 00:16:46.589 "zoned": false 00:16:46.589 } 00:16:46.589 ] 00:16:46.589 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:16:46.589 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:46.589 09:57:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:47.212 09:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:47.212 09:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:47.212 09:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:47.212 09:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:47.213 09:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0d02c0d1-2d5e-4f3b-89d8-a112aed798c2 00:16:47.470 09:57:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b14ea32-8e76-4b48-bca3-8ff13d8bc3e9 00:16:48.038 09:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:48.296 09:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:48.863 ************************************ 00:16:48.863 END TEST lvs_grow_clean 00:16:48.863 ************************************ 00:16:48.863 00:16:48.863 real 0m19.272s 00:16:48.863 user 0m17.330s 00:16:48.863 sys 0m3.158s 00:16:48.863 09:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:48.863 09:57:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:48.863 ************************************ 00:16:48.863 START TEST lvs_grow_dirty 00:16:48.863 ************************************ 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:48.863 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:49.121 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:49.121 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:49.687 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:16:49.687 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:16:49.687 09:57:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:49.687 09:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:49.687 09:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:49.687 09:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a lvol 150 00:16:50.285 09:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a5d38819-677c-446f-9ef7-65dd93741e2a 00:16:50.285 09:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:16:50.285 09:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:50.285 [2024-05-15 09:57:27.570004] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:50.285 [2024-05-15 09:57:27.570112] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:50.285 true 00:16:50.285 09:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:16:50.285 09:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:50.543 09:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:50.543 09:57:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:50.801 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a5d38819-677c-446f-9ef7-65dd93741e2a 00:16:51.059 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:51.318 [2024-05-15 09:57:28.618611] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.318 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:51.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.576 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73796 00:16:51.577 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:51.577 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:51.577 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73796 /var/tmp/bdevperf.sock 00:16:51.577 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 73796 ']' 00:16:51.577 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.577 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:51.577 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.577 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:51.577 09:57:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:51.834 [2024-05-15 09:57:29.008301] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
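Condensed from the surrounding xtrace, the wiring for this bdevperf pass is: export the lvol over NVMe/TCP on 10.0.0.2:4420, start bdevperf as a second SPDK app that idles on its own RPC socket (-z), attach the controller from that socket, then kick off perform_tests. A rough sketch using only calls visible in this trace ($lvol is a placeholder for the UUID returned by bdev_lvol_create, not a variable from the script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target side: subsystem, namespace, TCP listener (addresses as used in this run)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"          # $lvol: lvol UUID (placeholder)
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf waits for RPC (-z), is attached over TCP, then runs the workload
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the script the bdevperf launch is wrapped in waitforlisten on /var/tmp/bdevperf.sock, which is why the "Waiting for process to start up and listen on UNIX domain socket" line appears before the controller is attached.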
00:16:51.834 [2024-05-15 09:57:29.008985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73796 ] 00:16:51.834 [2024-05-15 09:57:29.158995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.092 [2024-05-15 09:57:29.330876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.696 09:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:52.696 09:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:16:52.696 09:57:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:52.954 Nvme0n1 00:16:52.954 09:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:53.211 [ 00:16:53.211 { 00:16:53.211 "aliases": [ 00:16:53.211 "a5d38819-677c-446f-9ef7-65dd93741e2a" 00:16:53.211 ], 00:16:53.211 "assigned_rate_limits": { 00:16:53.211 "r_mbytes_per_sec": 0, 00:16:53.211 "rw_ios_per_sec": 0, 00:16:53.211 "rw_mbytes_per_sec": 0, 00:16:53.211 "w_mbytes_per_sec": 0 00:16:53.211 }, 00:16:53.211 "block_size": 4096, 00:16:53.211 "claimed": false, 00:16:53.211 "driver_specific": { 00:16:53.211 "mp_policy": "active_passive", 00:16:53.211 "nvme": [ 00:16:53.211 { 00:16:53.211 "ctrlr_data": { 00:16:53.211 "ana_reporting": false, 00:16:53.211 "cntlid": 1, 00:16:53.211 "firmware_revision": "24.05", 00:16:53.211 "model_number": "SPDK bdev Controller", 00:16:53.211 "multi_ctrlr": true, 00:16:53.211 "oacs": { 00:16:53.211 "firmware": 0, 00:16:53.211 "format": 0, 00:16:53.211 "ns_manage": 0, 00:16:53.211 "security": 0 00:16:53.211 }, 00:16:53.211 "serial_number": "SPDK0", 00:16:53.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.211 "vendor_id": "0x8086" 00:16:53.211 }, 00:16:53.211 "ns_data": { 00:16:53.211 "can_share": true, 00:16:53.211 "id": 1 00:16:53.211 }, 00:16:53.211 "trid": { 00:16:53.211 "adrfam": "IPv4", 00:16:53.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.211 "traddr": "10.0.0.2", 00:16:53.211 "trsvcid": "4420", 00:16:53.211 "trtype": "TCP" 00:16:53.211 }, 00:16:53.211 "vs": { 00:16:53.211 "nvme_version": "1.3" 00:16:53.211 } 00:16:53.211 } 00:16:53.211 ] 00:16:53.211 }, 00:16:53.211 "memory_domains": [ 00:16:53.211 { 00:16:53.211 "dma_device_id": "system", 00:16:53.211 "dma_device_type": 1 00:16:53.211 } 00:16:53.211 ], 00:16:53.211 "name": "Nvme0n1", 00:16:53.211 "num_blocks": 38912, 00:16:53.211 "product_name": "NVMe disk", 00:16:53.211 "supported_io_types": { 00:16:53.211 "abort": true, 00:16:53.211 "compare": true, 00:16:53.211 "compare_and_write": true, 00:16:53.211 "flush": true, 00:16:53.211 "nvme_admin": true, 00:16:53.211 "nvme_io": true, 00:16:53.211 "read": true, 00:16:53.211 "reset": true, 00:16:53.211 "unmap": true, 00:16:53.211 "write": true, 00:16:53.211 "write_zeroes": true 00:16:53.211 }, 00:16:53.211 "uuid": "a5d38819-677c-446f-9ef7-65dd93741e2a", 00:16:53.211 "zoned": false 00:16:53.211 } 00:16:53.211 ] 00:16:53.211 09:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73844 00:16:53.211 09:57:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.211 09:57:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:53.469 Running I/O for 10 seconds... 00:16:54.402 Latency(us) 00:16:54.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:54.402 Nvme0n1 : 1.00 11797.00 46.08 0.00 0.00 0.00 0.00 0.00 00:16:54.402 =================================================================================================================== 00:16:54.402 Total : 11797.00 46.08 0.00 0.00 0.00 0.00 0.00 00:16:54.403 00:16:55.333 09:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:16:55.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:55.333 Nvme0n1 : 2.00 11833.00 46.22 0.00 0.00 0.00 0.00 0.00 00:16:55.333 =================================================================================================================== 00:16:55.333 Total : 11833.00 46.22 0.00 0.00 0.00 0.00 0.00 00:16:55.333 00:16:55.620 true 00:16:55.620 09:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:55.620 09:57:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:16:55.877 09:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:55.877 09:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:55.877 09:57:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 73844 00:16:56.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.438 Nvme0n1 : 3.00 11544.33 45.10 0.00 0.00 0.00 0.00 0.00 00:16:56.438 =================================================================================================================== 00:16:56.438 Total : 11544.33 45.10 0.00 0.00 0.00 0.00 0.00 00:16:56.438 00:16:57.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:57.370 Nvme0n1 : 4.00 11679.50 45.62 0.00 0.00 0.00 0.00 0.00 00:16:57.370 =================================================================================================================== 00:16:57.370 Total : 11679.50 45.62 0.00 0.00 0.00 0.00 0.00 00:16:57.370 00:16:58.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:58.305 Nvme0n1 : 5.00 11402.40 44.54 0.00 0.00 0.00 0.00 0.00 00:16:58.305 =================================================================================================================== 00:16:58.305 Total : 11402.40 44.54 0.00 0.00 0.00 0.00 0.00 00:16:58.305 00:16:59.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.679 Nvme0n1 : 6.00 11436.17 44.67 0.00 0.00 0.00 0.00 0.00 00:16:59.679 =================================================================================================================== 00:16:59.679 Total : 11436.17 44.67 0.00 0.00 0.00 0.00 0.00 00:16:59.679 00:17:00.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, 
depth: 128, IO size: 4096) 00:17:00.613 Nvme0n1 : 7.00 11529.43 45.04 0.00 0.00 0.00 0.00 0.00 00:17:00.613 =================================================================================================================== 00:17:00.613 Total : 11529.43 45.04 0.00 0.00 0.00 0.00 0.00 00:17:00.613 00:17:01.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.546 Nvme0n1 : 8.00 11601.88 45.32 0.00 0.00 0.00 0.00 0.00 00:17:01.546 =================================================================================================================== 00:17:01.546 Total : 11601.88 45.32 0.00 0.00 0.00 0.00 0.00 00:17:01.546 00:17:02.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.479 Nvme0n1 : 9.00 11654.67 45.53 0.00 0.00 0.00 0.00 0.00 00:17:02.479 =================================================================================================================== 00:17:02.479 Total : 11654.67 45.53 0.00 0.00 0.00 0.00 0.00 00:17:02.479 00:17:03.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.412 Nvme0n1 : 10.00 11627.00 45.42 0.00 0.00 0.00 0.00 0.00 00:17:03.412 =================================================================================================================== 00:17:03.412 Total : 11627.00 45.42 0.00 0.00 0.00 0.00 0.00 00:17:03.412 00:17:03.412 00:17:03.412 Latency(us) 00:17:03.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.412 Nvme0n1 : 10.00 11635.19 45.45 0.00 0.00 10995.83 4556.31 148797.93 00:17:03.412 =================================================================================================================== 00:17:03.412 Total : 11635.19 45.45 0.00 0.00 10995.83 4556.31 148797.93 00:17:03.412 0 00:17:03.412 09:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73796 00:17:03.412 09:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 73796 ']' 00:17:03.412 09:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 73796 00:17:03.412 09:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:17:03.412 09:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:03.412 09:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 73796 00:17:03.412 killing process with pid 73796 00:17:03.412 Received shutdown signal, test time was about 10.000000 seconds 00:17:03.412 00:17:03.412 Latency(us) 00:17:03.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.412 =================================================================================================================== 00:17:03.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:03.412 09:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:03.412 09:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:03.412 09:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 73796' 00:17:03.412 09:57:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 73796 00:17:03.412 09:57:40 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 73796 00:17:03.980 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:03.980 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:04.547 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:17:04.547 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73176 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73176 00:17:04.806 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73176 Killed "${NVMF_APP[@]}" "$@" 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:04.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74007 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74007 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 74007 ']' 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:04.806 09:57:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:04.806 [2024-05-15 09:57:42.050388] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
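The stretch that follows is the dirty half of the test: the original target (pid 73176 here) was killed with SIGKILL while the lvstore was still open, a fresh nvmf_tgt (pid 74007) is started, and re-creating the AIO bdev on the same backing file forces blobstore recovery, after which the lvol has to reappear with its clusters intact. A rough sketch of that sequence, limited to calls that occur in this trace ($nvmfpid and $lvol are placeholders for the target pid and the a5d38819-... lvol UUID):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
kill -9 "$nvmfpid"                                 # leave the lvstore dirty on purpose
# ...start a fresh target (nvmfappstart -m 0x1), then rebuild the stack on the same file:
$rpc bdev_aio_create "$aio_file" aio_bdev 4096     # logs 'Performing recovery on blobstore'
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b "$lvol" -t 2000             # lvol is visible again once replay finishes
$rpc bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a | jq -r '.[0].free_clusters'

The script additionally deletes the aio bdev, asserts that bdev_lvol_get_lvstores then fails with -19 (the "No such device" JSON-RPC error shown further down), re-creates it, and re-checks the cluster counts before tearing everything down. Either way the jq assertions encode the same arithmetic as the clean variant: the 200M file gives 49 data clusters at the 4 MiB cluster size, growing the lvstore after the truncate to 400M takes that to 99, and the 150 MiB lvol rounds up to 38 allocated clusters, so free_clusters must come back as 61.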
00:17:04.806 [2024-05-15 09:57:42.050793] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.063 [2024-05-15 09:57:42.198571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.063 [2024-05-15 09:57:42.367088] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.063 [2024-05-15 09:57:42.367401] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.063 [2024-05-15 09:57:42.367522] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.063 [2024-05-15 09:57:42.367680] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.063 [2024-05-15 09:57:42.367755] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:05.063 [2024-05-15 09:57:42.367872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.628 09:57:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:05.628 09:57:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:17:05.628 09:57:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.628 09:57:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:05.628 09:57:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:05.887 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.887 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:06.145 [2024-05-15 09:57:43.332982] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:06.145 [2024-05-15 09:57:43.333551] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:06.145 [2024-05-15 09:57:43.333870] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:06.145 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:06.145 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a5d38819-677c-446f-9ef7-65dd93741e2a 00:17:06.145 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=a5d38819-677c-446f-9ef7-65dd93741e2a 00:17:06.145 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:17:06.145 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:17:06.145 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:17:06.145 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:17:06.145 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:06.402 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a5d38819-677c-446f-9ef7-65dd93741e2a -t 2000 00:17:06.659 [ 00:17:06.659 { 00:17:06.659 "aliases": [ 00:17:06.659 "lvs/lvol" 00:17:06.659 ], 00:17:06.659 "assigned_rate_limits": { 00:17:06.659 "r_mbytes_per_sec": 0, 00:17:06.659 "rw_ios_per_sec": 0, 00:17:06.659 "rw_mbytes_per_sec": 0, 00:17:06.659 "w_mbytes_per_sec": 0 00:17:06.659 }, 00:17:06.659 "block_size": 4096, 00:17:06.659 "claimed": false, 00:17:06.659 "driver_specific": { 00:17:06.659 "lvol": { 00:17:06.659 "base_bdev": "aio_bdev", 00:17:06.659 "clone": false, 00:17:06.659 "esnap_clone": false, 00:17:06.659 "lvol_store_uuid": "a1cb3c22-7149-4307-b9c9-e5b52ef3e55a", 00:17:06.659 "num_allocated_clusters": 38, 00:17:06.659 "snapshot": false, 00:17:06.659 "thin_provision": false 00:17:06.659 } 00:17:06.659 }, 00:17:06.659 "name": "a5d38819-677c-446f-9ef7-65dd93741e2a", 00:17:06.659 "num_blocks": 38912, 00:17:06.659 "product_name": "Logical Volume", 00:17:06.659 "supported_io_types": { 00:17:06.659 "abort": false, 00:17:06.659 "compare": false, 00:17:06.659 "compare_and_write": false, 00:17:06.659 "flush": false, 00:17:06.659 "nvme_admin": false, 00:17:06.659 "nvme_io": false, 00:17:06.659 "read": true, 00:17:06.659 "reset": true, 00:17:06.659 "unmap": true, 00:17:06.659 "write": true, 00:17:06.659 "write_zeroes": true 00:17:06.659 }, 00:17:06.659 "uuid": "a5d38819-677c-446f-9ef7-65dd93741e2a", 00:17:06.659 "zoned": false 00:17:06.659 } 00:17:06.659 ] 00:17:06.659 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:17:06.659 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:06.659 09:57:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:17:06.916 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:06.917 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:17:06.917 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:07.174 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:07.174 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:07.431 [2024-05-15 09:57:44.629979] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:07.431 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:17:07.688 2024/05/15 09:57:44 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:a1cb3c22-7149-4307-b9c9-e5b52ef3e55a], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:17:07.689 request: 00:17:07.689 { 00:17:07.689 "method": "bdev_lvol_get_lvstores", 00:17:07.689 "params": { 00:17:07.689 "uuid": "a1cb3c22-7149-4307-b9c9-e5b52ef3e55a" 00:17:07.689 } 00:17:07.689 } 00:17:07.689 Got JSON-RPC error response 00:17:07.689 GoRPCClient: error on JSON-RPC call 00:17:07.689 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:17:07.689 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:07.689 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:07.689 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:07.689 09:57:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:07.946 aio_bdev 00:17:07.946 09:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a5d38819-677c-446f-9ef7-65dd93741e2a 00:17:07.946 09:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=a5d38819-677c-446f-9ef7-65dd93741e2a 00:17:07.946 09:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:17:07.946 09:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:17:07.946 09:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:17:07.946 09:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:17:07.946 09:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:08.510 09:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a5d38819-677c-446f-9ef7-65dd93741e2a -t 2000 00:17:08.510 [ 00:17:08.510 { 00:17:08.510 "aliases": [ 00:17:08.510 "lvs/lvol" 00:17:08.510 ], 00:17:08.510 
"assigned_rate_limits": { 00:17:08.510 "r_mbytes_per_sec": 0, 00:17:08.510 "rw_ios_per_sec": 0, 00:17:08.510 "rw_mbytes_per_sec": 0, 00:17:08.510 "w_mbytes_per_sec": 0 00:17:08.510 }, 00:17:08.510 "block_size": 4096, 00:17:08.510 "claimed": false, 00:17:08.510 "driver_specific": { 00:17:08.510 "lvol": { 00:17:08.510 "base_bdev": "aio_bdev", 00:17:08.510 "clone": false, 00:17:08.510 "esnap_clone": false, 00:17:08.510 "lvol_store_uuid": "a1cb3c22-7149-4307-b9c9-e5b52ef3e55a", 00:17:08.510 "num_allocated_clusters": 38, 00:17:08.510 "snapshot": false, 00:17:08.510 "thin_provision": false 00:17:08.510 } 00:17:08.510 }, 00:17:08.510 "name": "a5d38819-677c-446f-9ef7-65dd93741e2a", 00:17:08.510 "num_blocks": 38912, 00:17:08.510 "product_name": "Logical Volume", 00:17:08.510 "supported_io_types": { 00:17:08.510 "abort": false, 00:17:08.510 "compare": false, 00:17:08.510 "compare_and_write": false, 00:17:08.510 "flush": false, 00:17:08.510 "nvme_admin": false, 00:17:08.510 "nvme_io": false, 00:17:08.510 "read": true, 00:17:08.510 "reset": true, 00:17:08.510 "unmap": true, 00:17:08.510 "write": true, 00:17:08.510 "write_zeroes": true 00:17:08.510 }, 00:17:08.510 "uuid": "a5d38819-677c-446f-9ef7-65dd93741e2a", 00:17:08.510 "zoned": false 00:17:08.510 } 00:17:08.510 ] 00:17:08.510 09:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:17:08.510 09:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:17:08.510 09:57:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:08.768 09:57:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:08.768 09:57:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:17:08.768 09:57:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:09.026 09:57:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:09.026 09:57:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a5d38819-677c-446f-9ef7-65dd93741e2a 00:17:09.592 09:57:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1cb3c22-7149-4307-b9c9-e5b52ef3e55a 00:17:09.850 09:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:10.107 09:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:10.672 ************************************ 00:17:10.672 END TEST lvs_grow_dirty 00:17:10.672 ************************************ 00:17:10.672 00:17:10.672 real 0m21.744s 00:17:10.672 user 0m46.448s 00:17:10.672 sys 0m10.365s 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:10.672 nvmf_trace.0 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:10.672 rmmod nvme_tcp 00:17:10.672 rmmod nvme_fabrics 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74007 ']' 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74007 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 74007 ']' 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 74007 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74007 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:10.672 killing process with pid 74007 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74007' 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 74007 00:17:10.672 09:57:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 74007 00:17:11.238 09:57:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:11.238 09:57:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:11.238 09:57:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:11.238 09:57:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:11.238 09:57:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:11.238 09:57:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.238 09:57:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.238 09:57:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.238 09:57:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:11.238 00:17:11.238 real 0m43.561s 00:17:11.238 user 1m10.564s 00:17:11.238 sys 0m14.361s 00:17:11.238 ************************************ 00:17:11.238 END TEST nvmf_lvs_grow 00:17:11.238 ************************************ 00:17:11.238 09:57:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:11.238 09:57:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:11.238 09:57:48 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:11.238 09:57:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:11.238 09:57:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:11.238 09:57:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:11.238 ************************************ 00:17:11.238 START TEST nvmf_bdev_io_wait 00:17:11.238 ************************************ 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:11.238 * Looking for test storage... 00:17:11.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.238 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
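Before any of the bdev_io_wait RPCs run, nvmf_veth_init (NET_TYPE=virt, as set above) has to lay out the virtual network: the ip/iptables calls that follow first tear down whatever is left from a previous run (hence the "Cannot find device" messages) and then create a network namespace for the target, veth pairs for initiator and target, and a bridge joining the host-side ends. Condensed to a subset of the commands that appear verbatim in this trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2        # host reaches the target namespace over the bridge

The trace also brings each interface up and repeats the target-side steps for a second interface (nvmf_tgt_if2, 10.0.0.3); those lines are omitted here for brevity.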
00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:11.239 09:57:48 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:11.239 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:11.497 Cannot find device "nvmf_tgt_br" 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:11.497 Cannot find device "nvmf_tgt_br2" 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:11.497 Cannot find device "nvmf_tgt_br" 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:11.497 Cannot find device "nvmf_tgt_br2" 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:11.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:11.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:11.497 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:11.497 
09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:11.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:17:11.757 00:17:11.757 --- 10.0.0.2 ping statistics --- 00:17:11.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.757 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:11.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:11.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:11.757 00:17:11.757 --- 10.0.0.3 ping statistics --- 00:17:11.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.757 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:11.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:11.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:11.757 00:17:11.757 --- 10.0.0.1 ping statistics --- 00:17:11.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.757 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:17:11.757 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:11.758 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.758 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:11.758 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:11.758 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.758 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:11.758 09:57:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=74427 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 74427 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 74427 ']' 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:11.758 09:57:49 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:11.758 [2024-05-15 09:57:49.102782] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:17:11.758 [2024-05-15 09:57:49.102912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.015 [2024-05-15 09:57:49.251530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.272 [2024-05-15 09:57:49.430433] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.272 [2024-05-15 09:57:49.430523] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:12.272 [2024-05-15 09:57:49.430540] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.272 [2024-05-15 09:57:49.430554] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.273 [2024-05-15 09:57:49.430565] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.273 [2024-05-15 09:57:49.431239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.273 [2024-05-15 09:57:49.431342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.273 [2024-05-15 09:57:49.431463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.273 [2024-05-15 09:57:49.431468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:12.897 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:13.156 [2024-05-15 09:57:50.329682] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:13.156 Malloc0 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.156 
09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:13.156 [2024-05-15 09:57:50.399292] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:13.156 [2024-05-15 09:57:50.399919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74487 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:13.156 { 00:17:13.156 "params": { 00:17:13.156 "name": "Nvme$subsystem", 00:17:13.156 "trtype": "$TEST_TRANSPORT", 00:17:13.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.156 "adrfam": "ipv4", 00:17:13.156 "trsvcid": "$NVMF_PORT", 00:17:13.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.156 "hdgst": ${hdgst:-false}, 00:17:13.156 "ddgst": ${ddgst:-false} 00:17:13.156 }, 00:17:13.156 "method": "bdev_nvme_attach_controller" 00:17:13.156 } 00:17:13.156 EOF 00:17:13.156 )") 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74489 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:13.156 { 00:17:13.156 "params": { 00:17:13.156 "name": "Nvme$subsystem", 00:17:13.156 "trtype": "$TEST_TRANSPORT", 00:17:13.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.156 "adrfam": "ipv4", 00:17:13.156 "trsvcid": "$NVMF_PORT", 00:17:13.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.156 "hdgst": ${hdgst:-false}, 00:17:13.156 "ddgst": ${ddgst:-false} 00:17:13.156 }, 00:17:13.156 "method": "bdev_nvme_attach_controller" 00:17:13.156 } 00:17:13.156 EOF 00:17:13.156 )") 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74492 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74497 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:13.156 { 00:17:13.156 "params": { 00:17:13.156 "name": "Nvme$subsystem", 00:17:13.156 "trtype": "$TEST_TRANSPORT", 00:17:13.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.156 "adrfam": "ipv4", 00:17:13.156 "trsvcid": "$NVMF_PORT", 00:17:13.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.156 "hdgst": ${hdgst:-false}, 00:17:13.156 "ddgst": ${ddgst:-false} 00:17:13.156 }, 00:17:13.156 "method": "bdev_nvme_attach_controller" 00:17:13.156 } 00:17:13.156 EOF 00:17:13.156 )") 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:13.156 "params": { 00:17:13.156 "name": "Nvme1", 00:17:13.156 "trtype": "tcp", 00:17:13.156 "traddr": "10.0.0.2", 00:17:13.156 "adrfam": "ipv4", 00:17:13.156 "trsvcid": "4420", 00:17:13.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.156 "hdgst": false, 00:17:13.156 "ddgst": false 00:17:13.156 }, 00:17:13.156 "method": "bdev_nvme_attach_controller" 00:17:13.156 }' 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
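Each bdevperf job above is handed its NVMe-oF connection parameters as JSON through process substitution (the --json /dev/fd/63 arguments), built by gen_nvmf_target_json from the fragments printed in the trace. A standalone sketch of one equivalent invocation follows; the inner params are copied from the printf output above, while the outer "subsystems"/"bdev" wrapper is assumed from SPDK's usual JSON-config layout (it is not shown verbatim in this log), and the /tmp/nvme1.json file name is purely illustrative.

# Hedged sketch: write the generated config to a file instead of /dev/fd/63,
# then run the same write job as WRITE_PID above (-m 0x10 -i 1).
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme1.json \
    -q 128 -o 4096 -w write -t 1 -s 256 -m 0x10 -i 1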
00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:13.156 "params": { 00:17:13.156 "name": "Nvme1", 00:17:13.156 "trtype": "tcp", 00:17:13.156 "traddr": "10.0.0.2", 00:17:13.156 "adrfam": "ipv4", 00:17:13.156 "trsvcid": "4420", 00:17:13.156 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.156 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.156 "hdgst": false, 00:17:13.156 "ddgst": false 00:17:13.156 }, 00:17:13.156 "method": "bdev_nvme_attach_controller" 00:17:13.156 }' 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:13.156 { 00:17:13.156 "params": { 00:17:13.156 "name": "Nvme$subsystem", 00:17:13.156 "trtype": "$TEST_TRANSPORT", 00:17:13.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.156 "adrfam": "ipv4", 00:17:13.156 "trsvcid": "$NVMF_PORT", 00:17:13.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.156 "hdgst": ${hdgst:-false}, 00:17:13.156 "ddgst": ${ddgst:-false} 00:17:13.156 }, 00:17:13.156 "method": "bdev_nvme_attach_controller" 00:17:13.156 } 00:17:13.156 EOF 00:17:13.156 )") 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:13.156 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:13.156 "params": { 00:17:13.157 "name": "Nvme1", 00:17:13.157 "trtype": "tcp", 00:17:13.157 "traddr": "10.0.0.2", 00:17:13.157 "adrfam": "ipv4", 00:17:13.157 "trsvcid": "4420", 00:17:13.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.157 "hdgst": false, 00:17:13.157 "ddgst": false 00:17:13.157 }, 00:17:13.157 "method": "bdev_nvme_attach_controller" 00:17:13.157 }' 00:17:13.157 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74487 00:17:13.157 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:13.157 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:13.157 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:13.157 09:57:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:13.157 "params": { 00:17:13.157 "name": "Nvme1", 00:17:13.157 "trtype": "tcp", 00:17:13.157 "traddr": "10.0.0.2", 00:17:13.157 "adrfam": "ipv4", 00:17:13.157 "trsvcid": "4420", 00:17:13.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.157 "hdgst": false, 00:17:13.157 "ddgst": false 00:17:13.157 }, 00:17:13.157 "method": "bdev_nvme_attach_controller" 00:17:13.157 }' 00:17:13.157 [2024-05-15 09:57:50.475972] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:17:13.157 [2024-05-15 09:57:50.476086] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:13.157 [2024-05-15 09:57:50.481744] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:17:13.157 [2024-05-15 09:57:50.481842] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:13.157 [2024-05-15 09:57:50.489130] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:17:13.157 [2024-05-15 09:57:50.489213] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:13.157 [2024-05-15 09:57:50.492415] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:17:13.157 [2024-05-15 09:57:50.492646] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:13.414 [2024-05-15 09:57:50.740189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.672 [2024-05-15 09:57:50.863044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.672 [2024-05-15 09:57:50.876377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:13.672 [2024-05-15 09:57:50.997303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:13.672 [2024-05-15 09:57:51.019140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.929 [2024-05-15 09:57:51.121474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.929 [2024-05-15 09:57:51.164502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:13.929 Running I/O for 1 seconds... 00:17:13.929 Running I/O for 1 seconds... 00:17:13.929 [2024-05-15 09:57:51.257259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:14.186 Running I/O for 1 seconds... 00:17:14.186 Running I/O for 1 seconds... 
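Before the per-job results below, it helps to recall how the target that those four bdevperf jobs exercise was assembled: the rpc_cmd calls traced earlier from bdev_io_wait.sh map directly onto plain rpc.py invocations. A hedged sketch of the same sequence, assuming a hand-run nvmf_tgt listening on the default /var/tmp/spdk.sock (rpc_cmd is only the test harness's thin wrapper around this):

# Sketch of the traced target setup; arguments copied from the trace above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_set_options -p 5 -c 1
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420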
00:17:15.118 00:17:15.118 Latency(us) 00:17:15.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.118 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:15.118 Nvme1n1 : 1.00 186340.37 727.89 0.00 0.00 684.26 271.12 1178.09 00:17:15.118 =================================================================================================================== 00:17:15.118 Total : 186340.37 727.89 0.00 0.00 684.26 271.12 1178.09 00:17:15.118 00:17:15.118 Latency(us) 00:17:15.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.118 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:15.118 Nvme1n1 : 1.03 4219.67 16.48 0.00 0.00 29924.43 11671.65 59668.97 00:17:15.118 =================================================================================================================== 00:17:15.118 Total : 4219.67 16.48 0.00 0.00 29924.43 11671.65 59668.97 00:17:15.118 00:17:15.118 Latency(us) 00:17:15.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.118 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:15.118 Nvme1n1 : 1.01 3770.38 14.73 0.00 0.00 33749.92 11359.57 59419.31 00:17:15.118 =================================================================================================================== 00:17:15.118 Total : 3770.38 14.73 0.00 0.00 33749.92 11359.57 59419.31 00:17:15.118 00:17:15.118 Latency(us) 00:17:15.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.118 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:15.118 Nvme1n1 : 1.01 7504.15 29.31 0.00 0.00 16997.85 6023.07 31582.11 00:17:15.118 =================================================================================================================== 00:17:15.118 Total : 7504.15 29.31 0.00 0.00 16997.85 6023.07 31582.11 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74489 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74492 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74497 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.685 09:57:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.685 rmmod nvme_tcp 00:17:15.685 rmmod nvme_fabrics 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 74427 ']' 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 74427 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 74427 ']' 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 74427 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74427 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74427' 00:17:15.685 killing process with pid 74427 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 74427 00:17:15.685 [2024-05-15 09:57:53.040817] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:15.685 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 74427 00:17:16.275 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:16.275 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:16.275 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:16.275 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.275 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:16.275 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.276 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.276 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.276 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:16.276 00:17:16.276 real 0m5.025s 00:17:16.276 user 0m21.671s 00:17:16.276 sys 0m2.436s 00:17:16.276 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:16.276 ************************************ 00:17:16.276 END TEST nvmf_bdev_io_wait 00:17:16.276 ************************************ 00:17:16.276 09:57:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:16.276 09:57:53 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:16.276 09:57:53 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:16.276 09:57:53 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:16.276 09:57:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.276 ************************************ 
00:17:16.276 START TEST nvmf_queue_depth 00:17:16.276 ************************************ 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:16.276 * Looking for test storage... 00:17:16.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:16.276 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:16.533 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:16.534 Cannot find device "nvmf_tgt_br" 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.534 Cannot find device "nvmf_tgt_br2" 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:16.534 Cannot find device "nvmf_tgt_br" 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:16.534 Cannot find device "nvmf_tgt_br2" 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:16.534 09:57:53 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:16.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:16.534 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:16.534 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:16.792 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:16.792 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:16.792 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:16.792 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:16.792 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:16.792 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:16.792 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:16.792 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:16.792 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:16.792 09:57:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:16.792 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:16.792 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:16.792 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:17:16.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:17:16.792 00:17:16.792 --- 10.0.0.2 ping statistics --- 00:17:16.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.792 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:16.792 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:16.792 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:16.792 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:17:16.792 00:17:16.792 --- 10.0.0.3 ping statistics --- 00:17:16.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.792 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:16.792 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:16.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:17:16.793 00:17:16.793 --- 10.0.0.1 ping statistics --- 00:17:16.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.793 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=74735 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 74735 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 74735 ']' 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
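The nvmf_tgt instance now starting inside nvmf_tgt_ns_spdk will listen on 10.0.0.2:4420, which is reachable from the host only because of the veth/bridge topology that nvmf_veth_init built just above. Condensed into a hedged sketch (commands and names copied from the trace; error handling omitted, and the second target interface carrying 10.0.0.3 is set up analogously):

# Host-side initiator interface and in-namespace target interface, bridged together.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # second pair: nvmf_tgt_if2/nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # host -> target namespace, as verified above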
00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:16.793 09:57:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:16.793 [2024-05-15 09:57:54.127951] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:17:16.793 [2024-05-15 09:57:54.128973] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.050 [2024-05-15 09:57:54.276647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.309 [2024-05-15 09:57:54.465398] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.309 [2024-05-15 09:57:54.465599] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.309 [2024-05-15 09:57:54.465763] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.309 [2024-05-15 09:57:54.465894] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.309 [2024-05-15 09:57:54.465961] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.309 [2024-05-15 09:57:54.466060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.874 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:17.874 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:17:17.874 09:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.874 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:17.874 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.132 [2024-05-15 09:57:55.284694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.132 Malloc0 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.132 09:57:55 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.132 [2024-05-15 09:57:55.358712] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:18.132 [2024-05-15 09:57:55.359214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.132 09:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=74785 00:17:18.133 09:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:18.133 09:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.133 09:57:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 74785 /var/tmp/bdevperf.sock 00:17:18.133 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 74785 ']' 00:17:18.133 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.133 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:18.133 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.133 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:18.133 09:57:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:18.133 [2024-05-15 09:57:55.413350] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:17:18.133 [2024-05-15 09:57:55.414264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74785 ] 00:17:18.391 [2024-05-15 09:57:55.552423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.391 [2024-05-15 09:57:55.716530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.325 09:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:19.325 09:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:17:19.325 09:57:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:19.325 09:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.325 09:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:19.325 NVMe0n1 00:17:19.325 09:57:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.325 09:57:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:19.583 Running I/O for 10 seconds... 00:17:29.549 00:17:29.549 Latency(us) 00:17:29.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.549 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:29.549 Verification LBA range: start 0x0 length 0x4000 00:17:29.549 NVMe0n1 : 10.06 9601.46 37.51 0.00 0.00 106162.07 14792.41 104358.28 00:17:29.549 =================================================================================================================== 00:17:29.549 Total : 9601.46 37.51 0.00 0.00 106162.07 14792.41 104358.28 00:17:29.549 0 00:17:29.549 09:58:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 74785 00:17:29.549 09:58:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 74785 ']' 00:17:29.549 09:58:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 74785 00:17:29.549 09:58:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:17:29.549 09:58:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:29.549 09:58:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74785 00:17:29.549 09:58:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:29.549 09:58:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:29.549 09:58:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74785' 00:17:29.549 killing process with pid 74785 00:17:29.549 09:58:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 74785 00:17:29.549 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.549 00:17:29.549 Latency(us) 00:17:29.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.549 =================================================================================================================== 00:17:29.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.549 09:58:06 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 74785 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.115 rmmod nvme_tcp 00:17:30.115 rmmod nvme_fabrics 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 74735 ']' 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 74735 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 74735 ']' 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 74735 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 74735 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 74735' 00:17:30.115 killing process with pid 74735 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 74735 00:17:30.115 [2024-05-15 09:58:07.394073] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:30.115 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 74735 00:17:30.682 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.682 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.682 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.682 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.682 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.682 09:58:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.682 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.682 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.682 09:58:07 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:30.682 ************************************ 00:17:30.682 END TEST nvmf_queue_depth 00:17:30.682 ************************************ 00:17:30.682 00:17:30.682 real 0m14.328s 00:17:30.682 user 0m24.448s 00:17:30.682 sys 0m2.459s 00:17:30.682 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:30.682 09:58:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:30.682 09:58:07 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:30.682 09:58:07 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:30.682 09:58:07 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:30.682 09:58:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:30.682 ************************************ 00:17:30.682 START TEST nvmf_target_multipath 00:17:30.682 ************************************ 00:17:30.682 09:58:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:30.682 * Looking for test storage... 00:17:30.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.682 09:58:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.940 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:30.940 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:30.940 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:30.940 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:30.940 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:30.940 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:30.940 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.940 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:30.941 09:58:08 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:30.941 Cannot find device "nvmf_tgt_br" 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:30.941 Cannot find device "nvmf_tgt_br2" 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:30.941 Cannot find device "nvmf_tgt_br" 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:30.941 Cannot find device "nvmf_tgt_br2" 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:30.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:30.941 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:30.941 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
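[editor's note] The nvmf_veth_init trace above builds the virtual topology used for this NET_TYPE=virt run: a network namespace (nvmf_tgt_ns_spdk) holds the target-side ends of three veth pairs, while the initiator side (nvmf_init_if, 10.0.0.1) stays in the root namespace and the two target interfaces (10.0.0.2 and 10.0.0.3) are moved into the namespace. The condensed sketch below reproduces that layout outside the test harness; interface names and addresses are copied from the trace, and the bridge enslavement, iptables rule and connectivity pings that complete the setup follow in the log just after this point.

    # Minimal sketch of the veth/namespace topology built by nvmf_veth_init
    # (names and addresses as seen in the trace above; run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # The trace that follows then creates the bridge nvmf_br, enslaves the
    # *_br peers to it, inserts an iptables ACCEPT rule for TCP/4420 on
    # nvmf_init_if, and verifies connectivity with ping to 10.0.0.2/3/1.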
00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:31.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:17:31.211 00:17:31.211 --- 10.0.0.2 ping statistics --- 00:17:31.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.211 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:31.211 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:31.211 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:17:31.211 00:17:31.211 --- 10.0.0.3 ping statistics --- 00:17:31.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.211 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:31.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:31.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:31.211 00:17:31.211 --- 10.0.0.1 ping statistics --- 00:17:31.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.211 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75123 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75123 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@828 -- # '[' -z 75123 ']' 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:31.211 09:58:08 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:31.211 [2024-05-15 09:58:08.568589] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
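[editor's note] At this point nvmfappstart launches the SPDK target inside the test namespace and waitforlisten blocks until the application answers on its UNIX-domain RPC socket before any rpc.py calls are issued. The sketch below is a simplified stand-in for that pattern, not the harness code itself: the binary path, core mask and tracepoint flags are the ones visible in the trace, while the rpc_get_methods polling loop is an assumed, minimal replacement for what waitforlisten actually does.

    # Simplified bring-up pattern, assuming rpc.py polling is an acceptable
    # stand-in for waitforlisten (the real helper is in autotest_common.sh).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll the default RPC socket until the target is ready.
    for _ in $(seq 1 100); do
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

Once the target is up, the multipath test provisions it over RPC, as the trace below shows: nvmf_create_transport -t tcp -o -u 8192, a 64 MiB Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace attached, and TCP listeners on both 10.0.0.2:4420 and 10.0.0.3:4420 so the initiator can connect two paths to the same subsystem.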
00:17:31.212 [2024-05-15 09:58:08.569460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.470 [2024-05-15 09:58:08.711170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.728 [2024-05-15 09:58:08.888449] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.728 [2024-05-15 09:58:08.889036] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.728 [2024-05-15 09:58:08.889334] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.728 [2024-05-15 09:58:08.889823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.728 [2024-05-15 09:58:08.890019] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.728 [2024-05-15 09:58:08.890379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.728 [2024-05-15 09:58:08.890488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.728 [2024-05-15 09:58:08.890541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.728 [2024-05-15 09:58:08.890545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.293 09:58:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:32.293 09:58:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@861 -- # return 0 00:17:32.293 09:58:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:32.293 09:58:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:32.293 09:58:09 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:32.293 09:58:09 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.293 09:58:09 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:32.870 [2024-05-15 09:58:09.960891] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.871 09:58:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:33.129 Malloc0 00:17:33.129 09:58:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:17:33.387 09:58:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:33.644 09:58:10 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.903 [2024-05-15 09:58:11.181466] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:33.903 [2024-05-15 09:58:11.182604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:17:33.903 09:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:34.161 [2024-05-15 09:58:11.450059] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:34.161 09:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:17:34.418 09:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:17:34.676 09:58:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:17:34.676 09:58:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local i=0 00:17:34.676 09:58:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.676 09:58:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:17:34.676 09:58:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # sleep 2 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # return 0 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@76 -- # (( 2 == 2 )) 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:36.575 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75266 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:17:36.576 09:58:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:17:36.834 [global] 00:17:36.834 thread=1 00:17:36.834 invalidate=1 00:17:36.834 rw=randrw 00:17:36.834 time_based=1 00:17:36.834 runtime=6 00:17:36.834 ioengine=libaio 00:17:36.834 direct=1 00:17:36.834 bs=4096 00:17:36.834 iodepth=128 00:17:36.834 norandommap=0 00:17:36.834 numjobs=1 00:17:36.834 00:17:36.834 verify_dump=1 00:17:36.834 verify_backlog=512 00:17:36.834 verify_state_save=0 00:17:36.834 do_verify=1 00:17:36.834 verify=crc32c-intel 00:17:36.834 [job0] 00:17:36.834 filename=/dev/nvme0n1 00:17:36.834 Could not set queue depth (nvme0n1) 00:17:36.834 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:36.834 fio-3.35 00:17:36.834 Starting 1 thread 00:17:37.768 09:58:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:38.025 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- 
# local path=nvme0c0n1 ana_state=inaccessible 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:38.284 09:58:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:17:39.219 09:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:17:39.219 09:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:39.219 09:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:39.219 09:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:39.477 09:58:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:40.043 09:58:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:17:40.976 09:58:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:17:40.976 09:58:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:40.976 09:58:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:40.976 09:58:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75266 00:17:42.921 00:17:42.921 job0: (groupid=0, jobs=1): err= 0: pid=75293: Wed May 15 09:58:20 2024 00:17:42.921 read: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(260MiB/6005msec) 00:17:42.921 slat (usec): min=4, max=5906, avg=52.86, stdev=223.24 00:17:42.921 clat (usec): min=1033, max=18101, avg=7803.46, stdev=1292.43 00:17:42.921 lat (usec): min=1069, max=18111, avg=7856.32, stdev=1301.38 00:17:42.921 clat percentiles (usec): 00:17:42.921 | 1.00th=[ 4621], 5.00th=[ 6063], 10.00th=[ 6652], 20.00th=[ 7046], 00:17:42.921 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7832], 00:17:42.921 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[10421], 00:17:42.921 | 99.00th=[11863], 99.50th=[12387], 99.90th=[14353], 99.95th=[15139], 00:17:42.921 | 99.99th=[17171] 00:17:42.921 bw ( KiB/s): min=11704, max=28416, per=51.70%, avg=22955.64, stdev=6395.84, samples=11 00:17:42.921 iops : min= 2926, max= 7104, avg=5738.91, stdev=1598.96, samples=11 00:17:42.921 write: IOPS=6630, BW=25.9MiB/s (27.2MB/s)(136MiB/5253msec); 0 zone resets 00:17:42.921 slat (usec): min=7, max=2302, avg=60.18, stdev=159.24 00:17:42.921 clat (usec): min=702, max=15776, avg=6935.60, stdev=1078.49 00:17:42.921 lat (usec): min=1045, max=15796, avg=6995.78, stdev=1082.03 00:17:42.921 clat percentiles (usec): 00:17:42.921 | 1.00th=[ 3884], 5.00th=[ 5211], 10.00th=[ 5932], 20.00th=[ 6325], 00:17:42.921 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7111], 00:17:42.921 | 70.00th=[ 7242], 80.00th=[ 7504], 90.00th=[ 7963], 95.00th=[ 8848], 00:17:42.921 | 99.00th=[10290], 99.50th=[10683], 99.90th=[12256], 99.95th=[13304], 00:17:42.921 | 99.99th=[15401] 00:17:42.921 bw ( KiB/s): min=12288, max=28848, per=86.70%, avg=22994.91, stdev=6123.36, samples=11 00:17:42.921 iops : min= 3072, max= 7212, avg=5748.73, stdev=1530.84, samples=11 00:17:42.921 lat (usec) : 750=0.01% 00:17:42.921 lat (msec) : 2=0.02%, 4=0.57%, 10=94.74%, 20=4.67% 00:17:42.921 cpu : usr=6.04%, sys=22.26%, ctx=8144, majf=0, minf=121 00:17:42.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:17:42.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:42.921 issued rwts: total=66651,34828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:42.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:42.921 00:17:42.921 Run status group 0 (all jobs): 00:17:42.921 READ: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=260MiB (273MB), run=6005-6005msec 00:17:42.921 WRITE: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=136MiB (143MB), run=5253-5253msec 00:17:42.921 00:17:42.921 Disk stats (read/write): 00:17:42.921 nvme0n1: ios=65859/33958, merge=0/0, 
ticks=481028/221475, in_queue=702503, util=98.62% 00:17:42.921 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:43.488 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:43.746 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:17:43.746 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:17:43.746 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:43.747 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:43.747 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:43.747 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:43.747 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:17:43.747 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:17:43.747 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:43.747 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:43.747 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:43.747 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:17:43.747 09:58:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:17:44.679 09:58:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:17:44.679 09:58:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:44.679 09:58:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:17:44.679 09:58:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:17:44.679 09:58:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75419 00:17:44.679 09:58:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:17:44.679 09:58:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:17:44.679 [global] 00:17:44.679 thread=1 00:17:44.679 invalidate=1 00:17:44.679 rw=randrw 00:17:44.679 time_based=1 00:17:44.679 runtime=6 00:17:44.679 ioengine=libaio 00:17:44.679 direct=1 00:17:44.679 bs=4096 00:17:44.679 iodepth=128 00:17:44.679 norandommap=0 00:17:44.679 numjobs=1 00:17:44.679 00:17:44.679 verify_dump=1 00:17:44.679 verify_backlog=512 00:17:44.679 verify_state_save=0 00:17:44.679 do_verify=1 00:17:44.679 verify=crc32c-intel 00:17:44.679 [job0] 00:17:44.679 filename=/dev/nvme0n1 00:17:44.679 Could not set queue depth (nvme0n1) 00:17:44.679 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:44.679 fio-3.35 00:17:44.679 Starting 1 thread 00:17:45.613 09:58:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:45.871 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:46.436 09:58:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:17:47.369 09:58:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:17:47.369 09:58:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:47.369 09:58:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:47.369 09:58:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:47.626 09:58:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:48.191 09:58:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:17:49.125 09:58:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:17:49.125 09:58:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:17:49.125 09:58:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:17:49.125 09:58:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75419 00:17:51.074 00:17:51.074 job0: (groupid=0, jobs=1): err= 0: pid=75440: Wed May 15 09:58:28 2024 00:17:51.074 read: IOPS=12.3k, BW=48.0MiB/s (50.3MB/s)(288MiB/6004msec) 00:17:51.074 slat (usec): min=4, max=14441, avg=41.05, stdev=193.95 00:17:51.074 clat (usec): min=360, max=17117, avg=7179.28, stdev=1534.57 00:17:51.074 lat (usec): min=376, max=19308, avg=7220.33, stdev=1549.59 00:17:51.074 clat percentiles (usec): 00:17:51.074 | 1.00th=[ 2933], 5.00th=[ 4490], 10.00th=[ 5145], 20.00th=[ 6128], 00:17:51.074 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7242], 60.00th=[ 7504], 00:17:51.074 | 70.00th=[ 7832], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[ 9372], 00:17:51.074 | 99.00th=[11338], 99.50th=[11863], 99.90th=[13829], 99.95th=[16909], 00:17:51.074 | 99.99th=[17171] 00:17:51.074 bw ( KiB/s): min= 9744, max=39048, per=53.92%, avg=26491.64, stdev=8048.54, samples=11 00:17:51.074 iops : min= 2436, max= 9762, avg=6622.91, stdev=2012.14, samples=11 00:17:51.074 write: IOPS=7361, BW=28.8MiB/s (30.2MB/s)(150MiB/5215msec); 0 zone resets 00:17:51.074 slat (usec): min=7, max=6359, avg=48.92, stdev=138.65 00:17:51.074 clat (usec): min=956, max=16806, avg=6058.71, stdev=1582.17 00:17:51.074 lat (usec): min=992, max=16825, avg=6107.63, stdev=1594.84 00:17:51.074 clat percentiles (usec): 00:17:51.074 | 1.00th=[ 2278], 5.00th=[ 3294], 10.00th=[ 3785], 20.00th=[ 4555], 00:17:51.074 | 30.00th=[ 5342], 40.00th=[ 6128], 50.00th=[ 6456], 60.00th=[ 6718], 00:17:51.074 | 70.00th=[ 6980], 80.00th=[ 7242], 90.00th=[ 7570], 95.00th=[ 7832], 00:17:51.074 | 99.00th=[ 9896], 99.50th=[11076], 99.90th=[16581], 99.95th=[16712], 00:17:51.074 | 99.99th=[16712] 00:17:51.074 bw ( KiB/s): min=10472, max=38248, per=89.75%, avg=26427.64, stdev=7732.01, samples=11 00:17:51.075 iops : min= 2618, max= 9562, avg=6606.91, stdev=1933.00, samples=11 00:17:51.075 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:17:51.075 lat (msec) : 2=0.30%, 4=5.93%, 10=91.38%, 20=2.37% 00:17:51.075 cpu : usr=6.83%, sys=23.40%, ctx=9249, majf=0, minf=133 00:17:51.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:17:51.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:51.075 issued rwts: total=73749,38391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:51.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:51.075 00:17:51.075 Run status group 0 (all jobs): 00:17:51.075 READ: bw=48.0MiB/s (50.3MB/s), 48.0MiB/s-48.0MiB/s (50.3MB/s-50.3MB/s), io=288MiB (302MB), run=6004-6004msec 00:17:51.075 WRITE: bw=28.8MiB/s (30.2MB/s), 28.8MiB/s-28.8MiB/s (30.2MB/s-30.2MB/s), io=150MiB (157MB), run=5215-5215msec 00:17:51.075 00:17:51.075 Disk stats (read/write): 00:17:51.075 nvme0n1: ios=72049/38391, merge=0/0, ticks=482601/215089, in_queue=697690, util=98.55% 00:17:51.075 09:58:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:51.075 09:58:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.075 09:58:28 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1216 -- # local i=0 00:17:51.075 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:17:51.075 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.075 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:17:51.075 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.075 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1228 -- # return 0 00:17:51.075 09:58:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.332 09:58:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:17:51.332 09:58:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:17:51.332 09:58:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:17:51.332 09:58:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:17:51.332 09:58:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:51.332 09:58:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:51.590 09:58:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:51.590 09:58:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:51.590 09:58:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:51.590 09:58:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:51.590 rmmod nvme_tcp 00:17:51.590 rmmod nvme_fabrics 00:17:51.590 09:58:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:51.590 09:58:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:51.590 09:58:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:51.590 09:58:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75123 ']' 00:17:51.590 09:58:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75123 00:17:51.590 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@947 -- # '[' -z 75123 ']' 00:17:51.590 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # kill -0 75123 00:17:51.591 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # uname 00:17:51.591 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:51.591 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75123 00:17:51.591 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:51.591 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:51.591 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75123' 00:17:51.591 killing process with pid 75123 00:17:51.591 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # kill 75123 00:17:51.591 [2024-05-15 09:58:28.794900] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:51.591 09:58:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@971 -- # wait 75123 00:17:51.849 09:58:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:51.849 09:58:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:51.849 09:58:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:51.849 09:58:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.849 09:58:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:51.849 09:58:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.849 09:58:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.849 09:58:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.108 09:58:29 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:52.108 00:17:52.108 real 0m21.347s 00:17:52.108 user 1m21.909s 00:17:52.108 sys 0m7.996s 00:17:52.108 09:58:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:52.108 09:58:29 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:52.108 ************************************ 00:17:52.108 END TEST nvmf_target_multipath 00:17:52.108 ************************************ 00:17:52.108 09:58:29 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:52.108 09:58:29 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:52.108 09:58:29 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:52.108 09:58:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:52.108 ************************************ 00:17:52.108 START TEST nvmf_zcopy 00:17:52.108 ************************************ 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:52.108 * Looking for test storage... 
00:17:52.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:52.108 Cannot find device "nvmf_tgt_br" 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.108 Cannot find device "nvmf_tgt_br2" 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:17:52.108 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:52.367 Cannot find device "nvmf_tgt_br" 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:52.367 Cannot find device "nvmf_tgt_br2" 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:52.367 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:52.368 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:52.368 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:52.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:17:52.626 00:17:52.626 --- 10.0.0.2 ping statistics --- 00:17:52.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.626 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:52.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:52.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:17:52.626 00:17:52.626 --- 10.0.0.3 ping statistics --- 00:17:52.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.626 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:52.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:52.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:17:52.626 00:17:52.626 --- 10.0.0.1 ping statistics --- 00:17:52.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.626 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=75725 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 75725 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 75725 ']' 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:52.626 09:58:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:52.626 [2024-05-15 09:58:29.904654] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:17:52.626 [2024-05-15 09:58:29.904758] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.883 [2024-05-15 09:58:30.040441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.883 [2024-05-15 09:58:30.209361] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.883 [2024-05-15 09:58:30.209437] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
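Before the target's startup notices continue below, it is worth pulling together the network plumbing that nvmf_veth_init just built in the trace above: a network namespace nvmf_tgt_ns_spdk, veth pairs whose host-side ends are enslaved to a bridge nvmf_br, address 10.0.0.1/24 on the initiator side and 10.0.0.2/24 plus 10.0.0.3/24 inside the namespace, iptables rules admitting NVMe/TCP port 4420 and bridge forwarding, and ping checks in both directions. The following is a minimal standalone sketch of that topology using the same interface and namespace names as the log; it assumes root, skips the harness's "Cannot find device" cleanup pre-checks, and leaves out the second target pair (nvmf_tgt_if2/nvmf_tgt_br2, 10.0.0.3) for brevity.

# Minimal sketch of the veth/netns/bridge topology built by nvmf_veth_init (names taken from the log; run as root)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                     # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                    # bridge the host-side veth ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                 # same sanity checks the log performs
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Running the target itself under 'ip netns exec nvmf_tgt_ns_spdk', as the nvmf/common.sh@480 line above does, is what makes the 10.0.0.2:4420 listener reachable only through this virtual network.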
00:17:52.883 [2024-05-15 09:58:30.209450] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.883 [2024-05-15 09:58:30.209460] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.883 [2024-05-15 09:58:30.209485] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.883 [2024-05-15 09:58:30.209528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:53.818 [2024-05-15 09:58:30.932529] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:53.818 [2024-05-15 09:58:30.956473] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:53.818 [2024-05-15 09:58:30.956965] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:53.818 malloc0 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:53.818 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.819 09:58:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:53.819 09:58:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.819 09:58:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:53.819 09:58:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:53.819 09:58:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:53.819 09:58:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:53.819 09:58:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:53.819 09:58:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:53.819 { 00:17:53.819 "params": { 00:17:53.819 "name": "Nvme$subsystem", 00:17:53.819 "trtype": "$TEST_TRANSPORT", 00:17:53.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:53.819 "adrfam": "ipv4", 00:17:53.819 "trsvcid": "$NVMF_PORT", 00:17:53.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:53.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:53.819 "hdgst": ${hdgst:-false}, 00:17:53.819 "ddgst": ${ddgst:-false} 00:17:53.819 }, 00:17:53.819 "method": "bdev_nvme_attach_controller" 00:17:53.819 } 00:17:53.819 EOF 00:17:53.819 )") 00:17:53.819 09:58:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:53.819 09:58:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:53.819 09:58:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:53.819 09:58:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:53.819 "params": { 00:17:53.819 "name": "Nvme1", 00:17:53.819 "trtype": "tcp", 00:17:53.819 "traddr": "10.0.0.2", 00:17:53.819 "adrfam": "ipv4", 00:17:53.819 "trsvcid": "4420", 00:17:53.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.819 "hdgst": false, 00:17:53.819 "ddgst": false 00:17:53.819 }, 00:17:53.819 "method": "bdev_nvme_attach_controller" 00:17:53.819 }' 00:17:53.819 [2024-05-15 09:58:31.066351] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:17:53.819 [2024-05-15 09:58:31.066820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75776 ] 00:17:54.076 [2024-05-15 09:58:31.219794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.076 [2024-05-15 09:58:31.402597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.333 Running I/O for 10 seconds... 
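With the target up, the trace above (target/zcopy.sh@22 through @30) configures it entirely over JSON-RPC: a TCP transport created with the zero-copy option (-t tcp -o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 with listeners on 10.0.0.2:4420, a 32 MiB / 4096-byte-block malloc bdev, and that bdev attached as namespace 1; bdevperf then connects with the generated JSON config and runs the 10-second verify workload whose results appear immediately below. As a hedged sketch, the same sequence can be issued with scripts/rpc.py directly instead of the harness's rpc_cmd wrapper (flags copied verbatim from the trace; assumes nvmf_tgt is already listening on the default /var/tmp/spdk.sock):

# Same RPC sequence as zcopy.sh, issued by hand with scripts/rpc.py
cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy                    # TCP transport, zero-copy enabled
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                                           # allow any host, up to 10 namespaces
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                               # listener inside the target namespace
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                           # 32 MiB RAM bdev, 4 KiB blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as NSID 1

The RPC Unix-domain socket lives on the shared filesystem, so rpc.py can be run from outside the namespace, exactly as the rpc_cmd calls in this log are.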
00:18:04.300 00:18:04.300 Latency(us) 00:18:04.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.300 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:04.300 Verification LBA range: start 0x0 length 0x1000 00:18:04.300 Nvme1n1 : 10.01 6580.69 51.41 0.00 0.00 19390.13 947.93 32455.92 00:18:04.300 =================================================================================================================== 00:18:04.300 Total : 6580.69 51.41 0.00 0.00 19390.13 947.93 32455.92 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=75901 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:04.864 { 00:18:04.864 "params": { 00:18:04.864 "name": "Nvme$subsystem", 00:18:04.864 "trtype": "$TEST_TRANSPORT", 00:18:04.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:04.864 "adrfam": "ipv4", 00:18:04.864 "trsvcid": "$NVMF_PORT", 00:18:04.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:04.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:04.864 "hdgst": ${hdgst:-false}, 00:18:04.864 "ddgst": ${ddgst:-false} 00:18:04.864 }, 00:18:04.864 "method": "bdev_nvme_attach_controller" 00:18:04.864 } 00:18:04.864 EOF 00:18:04.864 )") 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:04.864 [2024-05-15 09:58:42.065656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.864 [2024-05-15 09:58:42.065838] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:04.864 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:04.864 09:58:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:04.864 "params": { 00:18:04.864 "name": "Nvme1", 00:18:04.864 "trtype": "tcp", 00:18:04.864 "traddr": "10.0.0.2", 00:18:04.864 "adrfam": "ipv4", 00:18:04.864 "trsvcid": "4420", 00:18:04.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.864 "hdgst": false, 00:18:04.864 "ddgst": false 00:18:04.864 }, 00:18:04.864 "method": "bdev_nvme_attach_controller" 00:18:04.864 }' 00:18:04.864 [2024-05-15 09:58:42.077619] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.864 [2024-05-15 09:58:42.077668] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.864 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.089579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.089630] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.101574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.101620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.113550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.113587] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.122838] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
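The JSON fragment printed just above is what gen_nvmf_target_json hands to the second bdevperf instance (whose startup banner continues below) on /dev/fd/63: a bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420 with host NQN nqn.2016-06.io.spdk:host1 and digests disabled. A rough way to reproduce that run without the helper is to write an equivalent config file by hand; note that the outer "subsystems"/"bdev" wrapper below is reconstructed from SPDK's usual JSON-config layout rather than copied from this log (the trace only shows the inner entry), and the /tmp path is just an example.

# Hedged sketch: hand-written bdevperf config equivalent to the gen_nvmf_target_json output above
cat > /tmp/bdevperf_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 5-second 50/50 random read/write run, queue depth 128, 8 KiB I/O, with the same flags as the trace
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvmf.json -t 5 -q 128 -w randrw -M 50 -o 8192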
00:18:04.865 [2024-05-15 09:58:42.122937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75901 ] 00:18:04.865 [2024-05-15 09:58:42.125547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.125580] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.137575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.137617] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.149560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.149595] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.161572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.161607] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.173584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.173623] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.185565] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.185604] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 
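From this point the log settles into one repeating three-line pattern: another nvmf_subsystem_add_ns call for NSID 1 arrives while namespace 1 is still attached, the target rejects it (subsystem.c: "Requested NSID 1 already in use", then "Unable to add namespace" from the nvmf_rpc_ns_paused callback), and rpc.py surfaces the JSON-RPC failure as Code=-32602 Msg=Invalid parameters. The second bdevperf (perfpid=75901) keeps its randrw workload running in the background the whole time, so these repeated, expected rejections appear to be the test exercising the namespace-add path while zero-copy I/O is active, not a sign of a broken run. Purely as an illustration of that pattern, and not the literal body of target/zcopy.sh, a driver loop of this shape might look like:

# Illustrative only: repeatedly poke nvmf_subsystem_add_ns while a backgrounded bdevperf is alive,
# treating the -32602 "already in use" rejection as the expected outcome. $perfpid is assumed to hold
# the PID of a bdevperf started with '&' from this same shell, like the perfpid=75901 assignment above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
while kill -0 "$perfpid" 2> /dev/null; do
        if ! "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
                echo "NSID 1 still in use -- expected while I/O is in flight"
        fi
done
wait "$perfpid"   # collect bdevperf's exit status once its timed run ends

In the harness the equivalent calls go through the rpc_cmd wrapper seen throughout this log, which is why every rejection is also echoed as an 'error on JSON-RPC call' line.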
[2024-05-15 09:58:42.197562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.197596] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.209585] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.209616] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.221560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.221587] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:04.865 [2024-05-15 09:58:42.233575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:04.865 [2024-05-15 09:58:42.233605] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.865 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.123 [2024-05-15 09:58:42.245586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.123 [2024-05-15 09:58:42.245622] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.123 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.123 [2024-05-15 09:58:42.257588] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.123 [2024-05-15 09:58:42.257620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.123 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.123 [2024-05-15 09:58:42.268707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.123 [2024-05-15 09:58:42.269620] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.123 [2024-05-15 09:58:42.269656] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:18:05.123 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.123 [2024-05-15 09:58:42.281670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.123 [2024-05-15 09:58:42.281734] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.123 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.123 [2024-05-15 09:58:42.293642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.123 [2024-05-15 09:58:42.293691] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.123 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.123 [2024-05-15 09:58:42.305649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.123 [2024-05-15 09:58:42.305698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.123 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.123 [2024-05-15 09:58:42.317656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.123 [2024-05-15 09:58:42.317704] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.123 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.123 [2024-05-15 09:58:42.329892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.123 [2024-05-15 09:58:42.330002] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.123 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.123 [2024-05-15 09:58:42.341661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.341710] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.353631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.353665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.366028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.366085] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.377992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.378034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.390032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.390085] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.402027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.402068] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.413994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.414026] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.426020] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.426063] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 
09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.438030] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.438077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.450029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.450068] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 [2024-05-15 09:58:42.451402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.462024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.462069] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.474039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.474086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.486063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.486127] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.124 [2024-05-15 09:58:42.498055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.124 [2024-05-15 09:58:42.498110] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.124 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.510042] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.510079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.522044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.522079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.534067] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.534121] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.546069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.546131] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.558064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.558114] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.570056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.570104] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.582077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.582133] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.590068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.590120] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.602126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.602182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.614173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.614233] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.626186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.626239] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.638122] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.638173] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.650155] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.650201] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.662120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.662167] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.674119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.674158] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 Running I/O for 5 seconds... 00:18:05.459 [2024-05-15 09:58:42.686101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.686130] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.697447] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.697496] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.459 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.459 [2024-05-15 09:58:42.714137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.459 [2024-05-15 09:58:42.714192] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.460 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.460 [2024-05-15 09:58:42.731229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.460 [2024-05-15 09:58:42.731284] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.460 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.460 [2024-05-15 09:58:42.751216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.460 [2024-05-15 09:58:42.751272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:05.460 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.460 [2024-05-15 09:58:42.768838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.460 [2024-05-15 09:58:42.768896] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.460 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.460 [2024-05-15 09:58:42.780434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.460 [2024-05-15 09:58:42.780496] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.460 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.460 [2024-05-15 09:58:42.789949] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.460 [2024-05-15 09:58:42.790006] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.460 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.460 [2024-05-15 09:58:42.807613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.460 [2024-05-15 09:58:42.807677] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.460 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.718 [2024-05-15 09:58:42.822050] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.718 [2024-05-15 09:58:42.822122] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.718 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:05.718 [2024-05-15 09:58:42.839188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:05.718 [2024-05-15 09:58:42.839244] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:05.718 2024/05/15 09:58:42 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
[log condensed: the same three messages, the client-side JSON-RPC error (Code=-32602 Msg=Invalid parameters for nvmf_subsystem_add_ns), subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, and nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace, repeat with only the timestamps changing from 09:58:42.768 through 09:58:44.827 (console timestamps 00:18:05.460 to 00:18:07.529); the last entry of that run is kept below]
00:18:07.529 [2024-05-15 09:58:44.827791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.529 [2024-05-15 09:58:44.827842] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.529 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.529 [2024-05-15 09:58:44.844057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.529 [2024-05-15 09:58:44.844120] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.529 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.529 [2024-05-15 09:58:44.859491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.529 [2024-05-15 09:58:44.859543] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.529 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.529 [2024-05-15 09:58:44.871244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.529 [2024-05-15 09:58:44.871288] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.529 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.529 [2024-05-15 09:58:44.887580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.529 [2024-05-15 09:58:44.887628] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.529 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.529 [2024-05-15 09:58:44.903745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.529 [2024-05-15 09:58:44.903795] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.529 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.786 [2024-05-15 09:58:44.915938] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.786 [2024-05-15 09:58:44.915987] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.786 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.786 [2024-05-15 09:58:44.931009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.786 [2024-05-15 09:58:44.931060] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:07.786 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:44.942880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:44.942934] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:44.957942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:44.957990] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:44.973849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:44.973898] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:44.989490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:44.989551] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:45.004866] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:45.004910] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:45.020674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:45.020723] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:07.787 [2024-05-15 09:58:45.035776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:45.035828] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:45.052848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:45.052910] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:45.070250] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:45.070305] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:45.086018] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:45.086069] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:45.098241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:45.098290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:45.113743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:45.113794] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:45.129942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:45.129993] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:45 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:45.145007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:45.145055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:07.787 [2024-05-15 09:58:45.157281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:07.787 [2024-05-15 09:58:45.157328] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:07.787 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.173188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.173239] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.189663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.189714] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.207384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.207443] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.224413] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.224489] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.239498] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.239552] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.256515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.256572] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.271509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.271558] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.283741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.283793] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.299186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.299234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.317076] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.317164] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.331791] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.331848] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.351658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.351716] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.368751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.368802] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.385481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.385540] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.045 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.045 [2024-05-15 09:58:45.396698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.045 [2024-05-15 09:58:45.396746] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.046 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.046 [2024-05-15 09:58:45.414293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.046 [2024-05-15 09:58:45.414350] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.046 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.430577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.430635] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.443538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.443592] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.454582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.454634] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.463085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.463139] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.474223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.474272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.485232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.485285] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.502341] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.502405] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.517377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.517430] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.533818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.533868] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.551039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.551086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.567809] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.567861] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.584467] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.584517] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.601129] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.601182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.617984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.618043] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.634602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:08.305 [2024-05-15 09:58:45.634663] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.651777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.651840] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.663061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.663120] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.305 [2024-05-15 09:58:45.679735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.305 [2024-05-15 09:58:45.679812] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.305 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.695672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.695728] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.706767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.706817] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.722650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.722702] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.733511] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.733567] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.748905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.748953] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.766153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.766199] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.778086] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.778146] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.789173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.789218] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.805911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.805957] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.821519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.821592] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.836409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.836462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.852595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.852654] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.868878] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.868934] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.885357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.885410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.898349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.898410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.564 [2024-05-15 09:58:45.914890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.564 [2024-05-15 09:58:45.914942] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.564 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.565 [2024-05-15 09:58:45.931810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.565 [2024-05-15 09:58:45.931867] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.565 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:45.948382] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:45.948453] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.823 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:45.966263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:45.966317] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.823 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:45.982382] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:45.982435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.823 2024/05/15 09:58:45 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:45.999225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:45.999279] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.823 2024/05/15 09:58:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:46.015974] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:46.016032] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.823 2024/05/15 09:58:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:46.032535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:46.032590] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:08.823 2024/05/15 09:58:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:46.048321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:46.048375] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.823 2024/05/15 09:58:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:46.063312] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:46.063365] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.823 2024/05/15 09:58:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:46.078900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:46.078955] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.823 2024/05/15 09:58:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:46.096981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:46.097053] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.823 2024/05/15 09:58:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:46.111888] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:46.111953] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.823 2024/05/15 09:58:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:08.823 [2024-05-15 09:58:46.128238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:08.823 [2024-05-15 09:58:46.128295] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:08.823 2024/05/15 09:58:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:08.823 [2024-05-15 09:58:46.144770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:08.823 [2024-05-15 09:58:46.144824] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:08.823 2024/05/15 09:58:46 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line failure (subsystem.c:2029 "Requested NSID 1 already in use", nvmf_rpc.c:1536 "Unable to add namespace", JSON-RPC err Code=-32602 Msg=Invalid parameters) recurs roughly every 15 ms from 09:58:46.144 through 09:58:47.688 (log offsets 00:18:08.823-00:18:10.439); the individual occurrences are condensed here ...]
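For readers not steeped in SPDK: the call that keeps failing above is the nvmf_subsystem_add_ns JSON-RPC method. A loop in the zcopy test keeps asking the target to attach bdev malloc0 as NSID 1 of nqn.2016-06.io.spdk:cnode1 while that NSID is still occupied, so every attempt is rejected with JSON-RPC error -32602 (the odd %!s(bool=false) fragment appears to be the Go RPC client's format-verb mismatch when printing the boolean no_auto_visible parameter). Below is a minimal sketch of one such iteration, assuming SPDK's scripts/rpc.py helper and the default RPC socket /var/tmp/spdk.sock; it is not the exact loop used by zcopy.sh.

# Hypothetical reproduction of a single failing attempt (sketch, not the test's loop).
# Assumes a running SPDK nvmf target on the default RPC socket with bdev "malloc0"
# already exported as NSID 1 of the subsystem, so this second attach must fail.
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
#
# On the wire this is roughly the following JSON-RPC 2.0 exchange:
#   request : {"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns",
#              "params":{"nqn":"nqn.2016-06.io.spdk:cnode1",
#                        "namespace":{"bdev_name":"malloc0","nsid":1}}}
#   response: {"jsonrpc":"2.0","id":1,
#              "error":{"code":-32602,"message":"Invalid parameters"}}
# i.e. exactly the Code=-32602 Msg=Invalid parameters reported throughout the log above.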
Requested NSID 1 already in use 00:18:10.439 [2024-05-15 09:58:47.673832] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.439 2024/05/15 09:58:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:10.439 [2024-05-15 09:58:47.688121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.439 [2024-05-15 09:58:47.688307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.439 2024/05/15 09:58:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:10.439 00:18:10.439 Latency(us) 00:18:10.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.439 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:10.439 Nvme1n1 : 5.01 13238.35 103.42 0.00 0.00 9657.82 4150.61 19099.06 00:18:10.439 =================================================================================================================== 00:18:10.439 Total : 13238.35 103.42 0.00 0.00 9657.82 4150.61 19099.06 00:18:10.439 [2024-05-15 09:58:47.699297] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.439 [2024-05-15 09:58:47.699466] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.439 2024/05/15 09:58:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:10.439 [2024-05-15 09:58:47.711302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.439 [2024-05-15 09:58:47.711490] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.439 2024/05/15 09:58:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:10.439 [2024-05-15 09:58:47.723315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.439 [2024-05-15 09:58:47.723447] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.439 2024/05/15 09:58:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:10.439 [2024-05-15 09:58:47.735295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.439 [2024-05-15 09:58:47.735455] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.439 2024/05/15 09:58:47 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
[... after the summary, the same duplicate-NSID failure keeps repeating in the identical three-line pattern from 09:58:47.699 through 09:58:48.083 (log offsets 00:18:10.439-00:18:10.956), until the background add-namespace loop exits; those occurrences are likewise condensed here ...]
parameters 00:18:10.698 [2024-05-15 09:58:48.059386] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.698 [2024-05-15 09:58:48.059507] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.698 2024/05/15 09:58:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:10.698 [2024-05-15 09:58:48.071382] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.698 [2024-05-15 09:58:48.071499] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.699 2024/05/15 09:58:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:10.956 [2024-05-15 09:58:48.083382] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:10.956 [2024-05-15 09:58:48.083503] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:10.956 2024/05/15 09:58:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:10.956 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75901) - No such process 00:18:10.956 09:58:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 75901 00:18:10.956 09:58:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:10.956 09:58:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.956 09:58:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:10.956 09:58:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.956 09:58:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:10.957 09:58:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.957 09:58:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:10.957 delay0 00:18:10.957 09:58:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.957 09:58:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:10.957 09:58:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.957 09:58:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:10.957 09:58:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.957 09:58:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:10.957 [2024-05-15 09:58:48.283243] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:17.517 Initializing NVMe Controllers 
00:18:17.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:17.517 Initialization complete. Launching workers. 00:18:17.517 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 70 00:18:17.517 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 357, failed to submit 33 00:18:17.517 success 162, unsuccess 195, failed 0 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.517 rmmod nvme_tcp 00:18:17.517 rmmod nvme_fabrics 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 75725 ']' 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 75725 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' -z 75725 ']' 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 75725 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # uname 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 75725 00:18:17.517 killing process with pid 75725 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 75725' 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 75725 00:18:17.517 [2024-05-15 09:58:54.667956] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:17.517 09:58:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 75725 00:18:17.775 09:58:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:17.775 09:58:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:17.775 09:58:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:17.775 09:58:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.775 09:58:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.775 09:58:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.775 09:58:55 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.775 09:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:17.775 09:58:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:17.775 ************************************ 00:18:17.775 END TEST nvmf_zcopy 00:18:17.775 ************************************ 00:18:17.775 00:18:17.775 real 0m25.820s 00:18:17.775 user 0m40.576s 00:18:17.775 sys 0m7.931s 00:18:17.775 09:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:17.775 09:58:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:18.034 09:58:55 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:18.034 09:58:55 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:18.034 09:58:55 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:18.034 09:58:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:18.034 ************************************ 00:18:18.034 START TEST nvmf_nmic 00:18:18.034 ************************************ 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:18.034 * Looking for test storage... 00:18:18.034 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.034 09:58:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic 
-- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:18.035 Cannot find device "nvmf_tgt_br" 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.035 Cannot find device "nvmf_tgt_br2" 00:18:18.035 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:18.294 Cannot find device "nvmf_tgt_br" 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:18.294 Cannot find device "nvmf_tgt_br2" 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:18.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:18.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:18.294 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br 
-j ACCEPT 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:18.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:18:18.553 00:18:18.553 --- 10.0.0.2 ping statistics --- 00:18:18.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.553 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:18.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:18.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:18:18.553 00:18:18.553 --- 10.0.0.3 ping statistics --- 00:18:18.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.553 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:18.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:18.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:18:18.553 00:18:18.553 --- 10.0.0.1 ping statistics --- 00:18:18.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.553 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:18.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76232 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76232 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 76232 ']' 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
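The veth/namespace plumbing traced above comes from nvmf_veth_init in test/nvmf/common.sh. A rough stand-alone sketch of the same topology, built only from commands visible in this trace (root privileges, the shebang and set -e are assumptions; interface names and addresses are copied from the run):

#!/usr/bin/env bash
# Sketch: recreate the nvmf test topology seen in the trace above (assumes root).
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The target itself is then started inside that namespace, as in the trace above: ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF.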
00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:18.553 09:58:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:18.553 [2024-05-15 09:58:55.849717] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:18:18.553 [2024-05-15 09:58:55.850132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.853 [2024-05-15 09:58:56.015887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:18.853 [2024-05-15 09:58:56.189431] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.853 [2024-05-15 09:58:56.189727] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.853 [2024-05-15 09:58:56.189856] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.853 [2024-05-15 09:58:56.189917] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.853 [2024-05-15 09:58:56.189967] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:18.853 [2024-05-15 09:58:56.190157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.853 [2024-05-15 09:58:56.190230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.853 [2024-05-15 09:58:56.190950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:18.853 [2024-05-15 09:58:56.190954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.802 09:58:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:19.802 09:58:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:18:19.802 09:58:56 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:19.802 09:58:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:19.802 09:58:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:19.802 [2024-05-15 09:58:57.036649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:19.802 Malloc0 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:19.802 [2024-05-15 09:58:57.117440] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:19.802 [2024-05-15 09:58:57.118288] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.802 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:19.803 test case1: single bdev can't be used in multiple subsystems 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:19.803 [2024-05-15 09:58:57.149545] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:19.803 [2024-05-15 09:58:57.149761] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:19.803 [2024-05-15 09:58:57.149895] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:19.803 2024/05/15 09:58:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:19.803 request: 00:18:19.803 { 00:18:19.803 
"method": "nvmf_subsystem_add_ns", 00:18:19.803 "params": { 00:18:19.803 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:19.803 "namespace": { 00:18:19.803 "bdev_name": "Malloc0", 00:18:19.803 "no_auto_visible": false 00:18:19.803 } 00:18:19.803 } 00:18:19.803 } 00:18:19.803 Got JSON-RPC error response 00:18:19.803 GoRPCClient: error on JSON-RPC call 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:19.803 Adding namespace failed - expected result. 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:19.803 test case2: host connect to nvmf target in multiple paths 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:19.803 [2024-05-15 09:58:57.165731] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.803 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:20.060 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:20.317 09:58:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:20.317 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:18:20.317 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.317 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:18:20.317 09:58:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:18:22.229 09:58:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:18:22.229 09:58:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:22.229 09:58:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:18:22.229 09:58:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:18:22.229 09:58:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.229 09:58:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:18:22.229 09:58:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:22.229 [global] 00:18:22.229 thread=1 00:18:22.229 invalidate=1 00:18:22.229 rw=write 00:18:22.229 time_based=1 00:18:22.229 runtime=1 00:18:22.229 ioengine=libaio 00:18:22.229 direct=1 00:18:22.229 bs=4096 
00:18:22.229 iodepth=1 00:18:22.229 norandommap=0 00:18:22.229 numjobs=1 00:18:22.229 00:18:22.229 verify_dump=1 00:18:22.229 verify_backlog=512 00:18:22.229 verify_state_save=0 00:18:22.229 do_verify=1 00:18:22.229 verify=crc32c-intel 00:18:22.229 [job0] 00:18:22.229 filename=/dev/nvme0n1 00:18:22.229 Could not set queue depth (nvme0n1) 00:18:22.498 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:22.498 fio-3.35 00:18:22.498 Starting 1 thread 00:18:23.450 00:18:23.450 job0: (groupid=0, jobs=1): err= 0: pid=76342: Wed May 15 09:59:00 2024 00:18:23.450 read: IOPS=3490, BW=13.6MiB/s (14.3MB/s)(13.6MiB/1001msec) 00:18:23.450 slat (nsec): min=9325, max=45469, avg=11451.27, stdev=3142.48 00:18:23.450 clat (usec): min=92, max=642, avg=145.47, stdev=28.80 00:18:23.450 lat (usec): min=119, max=654, avg=156.92, stdev=29.26 00:18:23.450 clat percentiles (usec): 00:18:23.450 | 1.00th=[ 117], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 126], 00:18:23.450 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 139], 60.00th=[ 147], 00:18:23.450 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 180], 00:18:23.450 | 99.00th=[ 258], 99.50th=[ 343], 99.90th=[ 388], 99.95th=[ 412], 00:18:23.450 | 99.99th=[ 644] 00:18:23.450 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:18:23.450 slat (usec): min=14, max=139, avg=18.89, stdev= 6.03 00:18:23.450 clat (usec): min=75, max=256, avg=104.72, stdev=15.74 00:18:23.450 lat (usec): min=90, max=396, avg=123.61, stdev=17.13 00:18:23.450 clat percentiles (usec): 00:18:23.450 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 00:18:23.450 | 30.00th=[ 94], 40.00th=[ 98], 50.00th=[ 105], 60.00th=[ 110], 00:18:23.450 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 124], 95.00th=[ 133], 00:18:23.450 | 99.00th=[ 149], 99.50th=[ 159], 99.90th=[ 184], 99.95th=[ 217], 00:18:23.450 | 99.99th=[ 258] 00:18:23.450 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:18:23.450 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:18:23.450 lat (usec) : 100=21.81%, 250=77.66%, 500=0.51%, 750=0.01% 00:18:23.450 cpu : usr=2.40%, sys=7.70%, ctx=7081, majf=0, minf=2 00:18:23.450 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:23.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.450 issued rwts: total=3494,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:23.450 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:23.450 00:18:23.450 Run status group 0 (all jobs): 00:18:23.450 READ: bw=13.6MiB/s (14.3MB/s), 13.6MiB/s-13.6MiB/s (14.3MB/s-14.3MB/s), io=13.6MiB (14.3MB), run=1001-1001msec 00:18:23.450 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:18:23.450 00:18:23.450 Disk stats (read/write): 00:18:23.450 nvme0n1: ios=3122/3302, merge=0/0, ticks=470/367, in_queue=837, util=91.08% 00:18:23.450 09:59:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:23.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:23.746 rmmod nvme_tcp 00:18:23.746 rmmod nvme_fabrics 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76232 ']' 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76232 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 76232 ']' 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 76232 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:23.746 09:59:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 76232 00:18:23.746 09:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:23.746 killing process with pid 76232 00:18:23.746 09:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:23.746 09:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 76232' 00:18:23.746 09:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 76232 00:18:23.746 [2024-05-15 09:59:01.017530] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:23.746 09:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 76232 00:18:24.312 09:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:24.312 09:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:24.312 09:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:24.312 09:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.313 09:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.313 09:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.313 09:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:18:24.313 09:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.313 09:59:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:24.313 ************************************ 00:18:24.313 END TEST nvmf_nmic 00:18:24.313 ************************************ 00:18:24.313 00:18:24.313 real 0m6.282s 00:18:24.313 user 0m20.019s 00:18:24.313 sys 0m1.864s 00:18:24.313 09:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:24.313 09:59:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:24.313 09:59:01 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:24.313 09:59:01 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:24.313 09:59:01 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:24.313 09:59:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:24.313 ************************************ 00:18:24.313 START TEST nvmf_fio_target 00:18:24.313 ************************************ 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:24.313 * Looking for test storage... 00:18:24.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.313 
09:59:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 
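For reference, the rpc_cmd calls traced in the nmic run above correspond roughly to plain scripts/rpc.py invocations along the following lines. This is only a sketch: it assumes a target already running and listening on the default /var/tmp/spdk.sock, while NQNs, addresses, sizes and the host NQN/ID are copied from this run.

# Sketch of the RPC sequence exercised by nmic.sh above, using rpc.py directly.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192       # same options as the traced call
$rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# test case1: Malloc0 is already claimed (exclusive_write), so adding it to a
# second subsystem is expected to fail with -32602 Invalid parameters:
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    || echo 'Adding namespace failed - expected result.'
# test case2: the same subsystem is reachable over a second listener,
# and the host connects once per path:
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 \
    --hostid=8b97099d-9860-4879-a034-2bfa904443b4
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 \
    --hostid=8b97099d-9860-4879-a034-2bfa904443b4

In the run itself these calls go through the rpc_cmd wrapper while the target runs inside nvmf_tgt_ns_spdk, and fio is then pointed at the resulting /dev/nvme0n1 via scripts/fio-wrapper with the job file shown in the trace above.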
00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:24.313 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:24.571 Cannot find device "nvmf_tgt_br" 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.571 Cannot find device "nvmf_tgt_br2" 
00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:24.571 Cannot find device "nvmf_tgt_br" 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:24.571 Cannot find device "nvmf_tgt_br2" 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:24.571 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:24.829 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:24.829 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:24.829 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:24.829 09:59:01 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:24.829 09:59:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:24.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:18:24.829 00:18:24.829 --- 10.0.0.2 ping statistics --- 00:18:24.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.829 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:24.829 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:24.829 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:18:24.829 00:18:24.829 --- 10.0.0.3 ping statistics --- 00:18:24.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.829 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:24.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:24.829 00:18:24.829 --- 10.0.0.1 ping statistics --- 00:18:24.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.829 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=76520 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # 
waitforlisten 76520 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 76520 ']' 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:24.829 09:59:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.829 [2024-05-15 09:59:02.155557] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:18:24.829 [2024-05-15 09:59:02.155824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.088 [2024-05-15 09:59:02.318306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:25.345 [2024-05-15 09:59:02.492846] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.345 [2024-05-15 09:59:02.493200] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.345 [2024-05-15 09:59:02.493348] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.345 [2024-05-15 09:59:02.493422] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.345 [2024-05-15 09:59:02.493524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
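Condensed from the traced commands above, the fixture the rest of the run depends on (a veth pair bridged into a network namespace, with the NVMe-oF target running inside that namespace) can be rebuilt by hand roughly as follows. This is a sketch, not the harness itself: names, addresses, port 4420 and the 0xF core mask are taken from the trace, the second target interface nvmf_tgt_if2/10.0.0.3 is set up the same way and omitted here, and everything must run as root.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target end is moved into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

A ping of 10.0.0.2 from the host, as the trace does just before starting the target, is enough to confirm the bridged path end to end.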
00:18:25.345 [2024-05-15 09:59:02.493686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.345 [2024-05-15 09:59:02.493822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.345 [2024-05-15 09:59:02.494318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:25.346 [2024-05-15 09:59:02.494336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.909 09:59:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:25.909 09:59:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:18:25.909 09:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.909 09:59:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:25.909 09:59:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.166 09:59:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.166 09:59:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:26.422 [2024-05-15 09:59:03.622365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.422 09:59:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:26.679 09:59:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:26.679 09:59:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:26.936 09:59:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:26.936 09:59:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:27.501 09:59:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:27.501 09:59:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:27.758 09:59:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:27.758 09:59:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:28.015 09:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:28.272 09:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:28.272 09:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:28.529 09:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:28.529 09:59:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:28.802 09:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:28.802 09:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:29.366 09:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:18:29.624 09:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:29.624 09:59:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:29.891 09:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:29.891 09:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:30.158 09:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.158 [2024-05-15 09:59:07.508510] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:30.158 [2024-05-15 09:59:07.509298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.158 09:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:30.722 09:59:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:30.980 09:59:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:30.980 09:59:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:30.980 09:59:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:18:30.980 09:59:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.980 09:59:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:18:30.980 09:59:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:18:30.980 09:59:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:18:33.518 09:59:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:18:33.518 09:59:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:18:33.518 09:59:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:18:33.518 09:59:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:18:33.518 09:59:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:18:33.518 09:59:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:18:33.518 09:59:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:33.518 [global] 00:18:33.518 thread=1 00:18:33.518 invalidate=1 00:18:33.518 rw=write 00:18:33.518 time_based=1 00:18:33.518 runtime=1 00:18:33.518 ioengine=libaio 00:18:33.518 direct=1 00:18:33.518 bs=4096 00:18:33.518 iodepth=1 00:18:33.518 norandommap=0 00:18:33.518 numjobs=1 00:18:33.518 00:18:33.518 verify_dump=1 00:18:33.518 verify_backlog=512 
00:18:33.518 verify_state_save=0 00:18:33.518 do_verify=1 00:18:33.518 verify=crc32c-intel 00:18:33.518 [job0] 00:18:33.518 filename=/dev/nvme0n1 00:18:33.518 [job1] 00:18:33.518 filename=/dev/nvme0n2 00:18:33.518 [job2] 00:18:33.518 filename=/dev/nvme0n3 00:18:33.518 [job3] 00:18:33.518 filename=/dev/nvme0n4 00:18:33.518 Could not set queue depth (nvme0n1) 00:18:33.518 Could not set queue depth (nvme0n2) 00:18:33.518 Could not set queue depth (nvme0n3) 00:18:33.518 Could not set queue depth (nvme0n4) 00:18:33.518 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:33.518 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:33.518 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:33.518 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:33.518 fio-3.35 00:18:33.518 Starting 4 threads 00:18:34.451 00:18:34.451 job0: (groupid=0, jobs=1): err= 0: pid=76831: Wed May 15 09:59:11 2024 00:18:34.451 read: IOPS=2762, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:18:34.451 slat (usec): min=9, max=800, avg=18.03, stdev=16.04 00:18:34.451 clat (usec): min=133, max=2944, avg=166.59, stdev=82.80 00:18:34.451 lat (usec): min=145, max=2956, avg=184.62, stdev=84.48 00:18:34.451 clat percentiles (usec): 00:18:34.451 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:18:34.451 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:18:34.451 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 202], 00:18:34.451 | 99.00th=[ 229], 99.50th=[ 241], 99.90th=[ 1844], 99.95th=[ 2507], 00:18:34.451 | 99.99th=[ 2933] 00:18:34.451 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:18:34.451 slat (usec): min=14, max=118, avg=26.29, stdev= 8.66 00:18:34.451 clat (usec): min=92, max=324, avg=129.34, stdev=19.57 00:18:34.451 lat (usec): min=114, max=443, avg=155.62, stdev=24.01 00:18:34.451 clat percentiles (usec): 00:18:34.451 | 1.00th=[ 104], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 116], 00:18:34.451 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 129], 00:18:34.451 | 70.00th=[ 135], 80.00th=[ 143], 90.00th=[ 155], 95.00th=[ 165], 00:18:34.451 | 99.00th=[ 208], 99.50th=[ 227], 99.90th=[ 239], 99.95th=[ 260], 00:18:34.451 | 99.99th=[ 326] 00:18:34.451 bw ( KiB/s): min=12288, max=12288, per=30.96%, avg=12288.00, stdev= 0.00, samples=1 00:18:34.451 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:18:34.451 lat (usec) : 100=0.05%, 250=99.76%, 500=0.09%, 1000=0.02% 00:18:34.451 lat (msec) : 2=0.05%, 4=0.03% 00:18:34.451 cpu : usr=2.40%, sys=10.10%, ctx=5886, majf=0, minf=5 00:18:34.451 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.451 issued rwts: total=2765,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.451 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.451 job1: (groupid=0, jobs=1): err= 0: pid=76832: Wed May 15 09:59:11 2024 00:18:34.451 read: IOPS=2890, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec) 00:18:34.451 slat (usec): min=9, max=934, avg=17.74, stdev=29.20 00:18:34.451 clat (usec): min=133, max=3229, avg=163.40, stdev=74.91 00:18:34.451 lat (usec): min=144, max=3252, 
avg=181.14, stdev=92.85 00:18:34.451 clat percentiles (usec): 00:18:34.451 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:18:34.451 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:18:34.451 | 70.00th=[ 165], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 178], 00:18:34.451 | 99.00th=[ 198], 99.50th=[ 255], 99.90th=[ 1156], 99.95th=[ 1860], 00:18:34.451 | 99.99th=[ 3228] 00:18:34.451 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:18:34.451 slat (usec): min=18, max=111, avg=26.35, stdev= 8.04 00:18:34.451 clat (usec): min=57, max=772, avg=125.22, stdev=19.94 00:18:34.451 lat (usec): min=119, max=804, avg=151.57, stdev=22.50 00:18:34.451 clat percentiles (usec): 00:18:34.451 | 1.00th=[ 106], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 116], 00:18:34.451 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 126], 00:18:34.451 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 143], 00:18:34.451 | 99.00th=[ 157], 99.50th=[ 174], 99.90th=[ 363], 99.95th=[ 404], 00:18:34.451 | 99.99th=[ 775] 00:18:34.451 bw ( KiB/s): min=12288, max=12288, per=30.96%, avg=12288.00, stdev= 0.00, samples=1 00:18:34.451 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:18:34.451 lat (usec) : 100=0.03%, 250=99.51%, 500=0.32%, 1000=0.05% 00:18:34.451 lat (msec) : 2=0.07%, 4=0.02% 00:18:34.451 cpu : usr=2.70%, sys=9.80%, ctx=5967, majf=0, minf=5 00:18:34.451 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.451 issued rwts: total=2893,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.451 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.451 job2: (groupid=0, jobs=1): err= 0: pid=76833: Wed May 15 09:59:11 2024 00:18:34.451 read: IOPS=1659, BW=6637KiB/s (6797kB/s)(6644KiB/1001msec) 00:18:34.451 slat (nsec): min=10041, max=97541, avg=16346.73, stdev=6488.95 00:18:34.451 clat (usec): min=162, max=3836, avg=286.30, stdev=135.33 00:18:34.451 lat (usec): min=176, max=3859, avg=302.65, stdev=135.87 00:18:34.451 clat percentiles (usec): 00:18:34.451 | 1.00th=[ 184], 5.00th=[ 215], 10.00th=[ 253], 20.00th=[ 265], 00:18:34.451 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:18:34.451 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:18:34.451 | 99.00th=[ 359], 99.50th=[ 400], 99.90th=[ 3326], 99.95th=[ 3851], 00:18:34.451 | 99.99th=[ 3851] 00:18:34.451 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:34.451 slat (usec): min=16, max=107, avg=27.60, stdev= 8.21 00:18:34.451 clat (usec): min=111, max=318, avg=211.92, stdev=38.85 00:18:34.451 lat (usec): min=139, max=414, avg=239.52, stdev=38.24 00:18:34.451 clat percentiles (usec): 00:18:34.451 | 1.00th=[ 124], 5.00th=[ 131], 10.00th=[ 139], 20.00th=[ 194], 00:18:34.451 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:18:34.451 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 255], 00:18:34.451 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 306], 99.95th=[ 310], 00:18:34.451 | 99.99th=[ 318] 00:18:34.451 bw ( KiB/s): min= 8192, max= 8192, per=20.64%, avg=8192.00, stdev= 0.00, samples=1 00:18:34.451 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:34.451 lat (usec) : 250=54.35%, 500=45.54% 00:18:34.451 lat (msec) : 2=0.03%, 4=0.08% 00:18:34.451 cpu : 
usr=1.10%, sys=6.90%, ctx=3718, majf=0, minf=13 00:18:34.451 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.451 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.451 issued rwts: total=1661,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.451 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.451 job3: (groupid=0, jobs=1): err= 0: pid=76834: Wed May 15 09:59:11 2024 00:18:34.451 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:34.451 slat (usec): min=9, max=102, avg=17.67, stdev= 7.24 00:18:34.451 clat (usec): min=153, max=478, avg=327.65, stdev=56.61 00:18:34.452 lat (usec): min=170, max=502, avg=345.32, stdev=59.83 00:18:34.452 clat percentiles (usec): 00:18:34.452 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 285], 00:18:34.452 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 306], 00:18:34.452 | 70.00th=[ 326], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 437], 00:18:34.452 | 99.00th=[ 453], 99.50th=[ 457], 99.90th=[ 469], 99.95th=[ 478], 00:18:34.452 | 99.99th=[ 478] 00:18:34.452 write: IOPS=1738, BW=6953KiB/s (7120kB/s)(6960KiB/1001msec); 0 zone resets 00:18:34.452 slat (usec): min=16, max=889, avg=30.05, stdev=26.28 00:18:34.452 clat (usec): min=6, max=3846, avg=236.01, stdev=126.60 00:18:34.452 lat (usec): min=148, max=3869, avg=266.05, stdev=134.43 00:18:34.452 clat percentiles (usec): 00:18:34.452 | 1.00th=[ 172], 5.00th=[ 192], 10.00th=[ 202], 20.00th=[ 210], 00:18:34.452 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:18:34.452 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:18:34.452 | 99.00th=[ 302], 99.50th=[ 322], 99.90th=[ 3359], 99.95th=[ 3851], 00:18:34.452 | 99.99th=[ 3851] 00:18:34.452 bw ( KiB/s): min= 8192, max= 8192, per=20.64%, avg=8192.00, stdev= 0.00, samples=1 00:18:34.452 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:34.452 lat (usec) : 10=0.03%, 250=41.97%, 500=57.88% 00:18:34.452 lat (msec) : 2=0.06%, 4=0.06% 00:18:34.452 cpu : usr=1.80%, sys=5.80%, ctx=3279, majf=0, minf=12 00:18:34.452 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.452 issued rwts: total=1536,1740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.452 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.452 00:18:34.452 Run status group 0 (all jobs): 00:18:34.452 READ: bw=34.6MiB/s (36.2MB/s), 6138KiB/s-11.3MiB/s (6285kB/s-11.8MB/s), io=34.6MiB (36.3MB), run=1001-1001msec 00:18:34.452 WRITE: bw=38.8MiB/s (40.6MB/s), 6953KiB/s-12.0MiB/s (7120kB/s-12.6MB/s), io=38.8MiB (40.7MB), run=1001-1001msec 00:18:34.452 00:18:34.452 Disk stats (read/write): 00:18:34.452 nvme0n1: ios=2396/2560, merge=0/0, ticks=418/354, in_queue=772, util=85.74% 00:18:34.452 nvme0n2: ios=2549/2560, merge=0/0, ticks=439/345, in_queue=784, util=86.82% 00:18:34.452 nvme0n3: ios=1536/1565, merge=0/0, ticks=440/353, in_queue=793, util=88.62% 00:18:34.452 nvme0n4: ios=1313/1536, merge=0/0, ticks=424/342, in_queue=766, util=89.47% 00:18:34.452 09:59:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:34.452 [global] 00:18:34.452 thread=1 00:18:34.452 
invalidate=1 00:18:34.452 rw=randwrite 00:18:34.452 time_based=1 00:18:34.452 runtime=1 00:18:34.452 ioengine=libaio 00:18:34.452 direct=1 00:18:34.452 bs=4096 00:18:34.452 iodepth=1 00:18:34.452 norandommap=0 00:18:34.452 numjobs=1 00:18:34.452 00:18:34.452 verify_dump=1 00:18:34.452 verify_backlog=512 00:18:34.452 verify_state_save=0 00:18:34.452 do_verify=1 00:18:34.452 verify=crc32c-intel 00:18:34.452 [job0] 00:18:34.452 filename=/dev/nvme0n1 00:18:34.452 [job1] 00:18:34.452 filename=/dev/nvme0n2 00:18:34.452 [job2] 00:18:34.452 filename=/dev/nvme0n3 00:18:34.452 [job3] 00:18:34.452 filename=/dev/nvme0n4 00:18:34.709 Could not set queue depth (nvme0n1) 00:18:34.709 Could not set queue depth (nvme0n2) 00:18:34.709 Could not set queue depth (nvme0n3) 00:18:34.709 Could not set queue depth (nvme0n4) 00:18:34.709 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:34.709 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:34.709 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:34.709 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:34.709 fio-3.35 00:18:34.709 Starting 4 threads 00:18:36.081 00:18:36.081 job0: (groupid=0, jobs=1): err= 0: pid=76887: Wed May 15 09:59:13 2024 00:18:36.081 read: IOPS=1153, BW=4615KiB/s (4726kB/s)(4620KiB/1001msec) 00:18:36.081 slat (nsec): min=7087, max=64510, avg=13890.17, stdev=4790.84 00:18:36.081 clat (usec): min=141, max=1092, avg=455.28, stdev=209.31 00:18:36.081 lat (usec): min=151, max=1110, avg=469.17, stdev=211.16 00:18:36.081 clat percentiles (usec): 00:18:36.081 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 182], 20.00th=[ 265], 00:18:36.081 | 30.00th=[ 314], 40.00th=[ 338], 50.00th=[ 453], 60.00th=[ 498], 00:18:36.081 | 70.00th=[ 537], 80.00th=[ 693], 90.00th=[ 775], 95.00th=[ 807], 00:18:36.081 | 99.00th=[ 881], 99.50th=[ 930], 99.90th=[ 1004], 99.95th=[ 1090], 00:18:36.081 | 99.99th=[ 1090] 00:18:36.081 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:36.081 slat (usec): min=12, max=205, avg=27.13, stdev=21.03 00:18:36.081 clat (usec): min=42, max=745, avg=268.25, stdev=125.31 00:18:36.081 lat (usec): min=120, max=762, avg=295.38, stdev=125.46 00:18:36.081 clat percentiles (usec): 00:18:36.081 | 1.00th=[ 64], 5.00th=[ 117], 10.00th=[ 122], 20.00th=[ 131], 00:18:36.081 | 30.00th=[ 149], 40.00th=[ 202], 50.00th=[ 273], 60.00th=[ 330], 00:18:36.081 | 70.00th=[ 367], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 457], 00:18:36.081 | 99.00th=[ 523], 99.50th=[ 553], 99.90th=[ 594], 99.95th=[ 742], 00:18:36.081 | 99.99th=[ 742] 00:18:36.081 bw ( KiB/s): min= 8192, max= 8192, per=29.29%, avg=8192.00, stdev= 0.00, samples=1 00:18:36.081 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:36.081 lat (usec) : 50=0.11%, 100=1.15%, 250=33.63%, 500=47.05%, 750=12.19% 00:18:36.081 lat (usec) : 1000=5.80% 00:18:36.081 lat (msec) : 2=0.07% 00:18:36.081 cpu : usr=1.00%, sys=4.30%, ctx=2934, majf=0, minf=14 00:18:36.081 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:36.081 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.081 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.081 issued rwts: total=1155,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.081 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:18:36.081 job1: (groupid=0, jobs=1): err= 0: pid=76888: Wed May 15 09:59:13 2024 00:18:36.081 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:18:36.081 slat (nsec): min=9708, max=75375, avg=16655.54, stdev=6437.97 00:18:36.081 clat (usec): min=140, max=1040, avg=229.00, stdev=46.30 00:18:36.081 lat (usec): min=157, max=1054, avg=245.66, stdev=47.65 00:18:36.081 clat percentiles (usec): 00:18:36.081 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 196], 00:18:36.081 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 229], 00:18:36.081 | 70.00th=[ 241], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 297], 00:18:36.081 | 99.00th=[ 326], 99.50th=[ 453], 99.90th=[ 717], 99.95th=[ 791], 00:18:36.082 | 99.99th=[ 1037] 00:18:36.082 write: IOPS=2497, BW=9990KiB/s (10.2MB/s)(9.77MiB/1001msec); 0 zone resets 00:18:36.082 slat (usec): min=14, max=173, avg=24.20, stdev= 8.35 00:18:36.082 clat (usec): min=98, max=575, avg=171.35, stdev=33.55 00:18:36.082 lat (usec): min=114, max=591, avg=195.55, stdev=36.73 00:18:36.082 clat percentiles (usec): 00:18:36.082 | 1.00th=[ 120], 5.00th=[ 135], 10.00th=[ 141], 20.00th=[ 147], 00:18:36.082 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 172], 00:18:36.082 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 219], 95.00th=[ 237], 00:18:36.082 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 433], 99.95th=[ 490], 00:18:36.082 | 99.99th=[ 578] 00:18:36.082 bw ( KiB/s): min= 9664, max= 9664, per=34.55%, avg=9664.00, stdev= 0.00, samples=1 00:18:36.082 iops : min= 2416, max= 2416, avg=2416.00, stdev= 0.00, samples=1 00:18:36.082 lat (usec) : 100=0.04%, 250=87.16%, 500=12.62%, 750=0.13%, 1000=0.02% 00:18:36.082 lat (msec) : 2=0.02% 00:18:36.082 cpu : usr=1.60%, sys=7.30%, ctx=4551, majf=0, minf=3 00:18:36.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:36.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.082 issued rwts: total=2048,2500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:36.082 job2: (groupid=0, jobs=1): err= 0: pid=76889: Wed May 15 09:59:13 2024 00:18:36.082 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:36.082 slat (nsec): min=10667, max=95188, avg=17641.03, stdev=5689.65 00:18:36.082 clat (usec): min=180, max=835, avg=447.95, stdev=115.92 00:18:36.082 lat (usec): min=199, max=853, avg=465.59, stdev=116.71 00:18:36.082 clat percentiles (usec): 00:18:36.082 | 1.00th=[ 281], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 322], 00:18:36.082 | 30.00th=[ 338], 40.00th=[ 420], 50.00th=[ 465], 60.00th=[ 486], 00:18:36.082 | 70.00th=[ 515], 80.00th=[ 545], 90.00th=[ 594], 95.00th=[ 644], 00:18:36.082 | 99.00th=[ 734], 99.50th=[ 758], 99.90th=[ 832], 99.95th=[ 832], 00:18:36.082 | 99.99th=[ 832] 00:18:36.082 write: IOPS=1491, BW=5966KiB/s (6109kB/s)(5972KiB/1001msec); 0 zone resets 00:18:36.082 slat (usec): min=12, max=178, avg=28.17, stdev= 8.90 00:18:36.082 clat (usec): min=109, max=1865, avg=318.60, stdev=97.92 00:18:36.082 lat (usec): min=143, max=1894, avg=346.77, stdev=98.14 00:18:36.082 clat percentiles (usec): 00:18:36.082 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 221], 00:18:36.082 | 30.00th=[ 237], 40.00th=[ 289], 50.00th=[ 326], 60.00th=[ 351], 00:18:36.082 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 433], 95.00th=[ 465], 00:18:36.082 | 
99.00th=[ 537], 99.50th=[ 562], 99.90th=[ 660], 99.95th=[ 1860], 00:18:36.082 | 99.99th=[ 1860] 00:18:36.082 bw ( KiB/s): min= 7576, max= 7576, per=27.09%, avg=7576.00, stdev= 0.00, samples=1 00:18:36.082 iops : min= 1894, max= 1894, avg=1894.00, stdev= 0.00, samples=1 00:18:36.082 lat (usec) : 250=19.90%, 500=64.08%, 750=15.69%, 1000=0.28% 00:18:36.082 lat (msec) : 2=0.04% 00:18:36.082 cpu : usr=1.60%, sys=4.50%, ctx=2558, majf=0, minf=13 00:18:36.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:36.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.082 issued rwts: total=1024,1493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:36.082 job3: (groupid=0, jobs=1): err= 0: pid=76890: Wed May 15 09:59:13 2024 00:18:36.082 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:36.082 slat (nsec): min=7946, max=69694, avg=13900.58, stdev=5106.21 00:18:36.082 clat (usec): min=205, max=857, avg=456.51, stdev=120.24 00:18:36.082 lat (usec): min=269, max=874, avg=470.41, stdev=120.22 00:18:36.082 clat percentiles (usec): 00:18:36.082 | 1.00th=[ 277], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 326], 00:18:36.082 | 30.00th=[ 343], 40.00th=[ 429], 50.00th=[ 469], 60.00th=[ 498], 00:18:36.082 | 70.00th=[ 523], 80.00th=[ 553], 90.00th=[ 611], 95.00th=[ 652], 00:18:36.082 | 99.00th=[ 750], 99.50th=[ 799], 99.90th=[ 840], 99.95th=[ 857], 00:18:36.082 | 99.99th=[ 857] 00:18:36.082 write: IOPS=1468, BW=5874KiB/s (6015kB/s)(5880KiB/1001msec); 0 zone resets 00:18:36.082 slat (usec): min=11, max=187, avg=24.05, stdev=10.05 00:18:36.082 clat (usec): min=94, max=1788, avg=325.48, stdev=101.08 00:18:36.082 lat (usec): min=171, max=1810, avg=349.53, stdev=99.17 00:18:36.082 clat percentiles (usec): 00:18:36.082 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 223], 00:18:36.082 | 30.00th=[ 239], 40.00th=[ 297], 50.00th=[ 334], 60.00th=[ 355], 00:18:36.082 | 70.00th=[ 383], 80.00th=[ 408], 90.00th=[ 445], 95.00th=[ 469], 00:18:36.082 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 799], 99.95th=[ 1795], 00:18:36.082 | 99.99th=[ 1795] 00:18:36.082 bw ( KiB/s): min= 7496, max= 7496, per=26.80%, avg=7496.00, stdev= 0.00, samples=1 00:18:36.082 iops : min= 1874, max= 1874, avg=1874.00, stdev= 0.00, samples=1 00:18:36.082 lat (usec) : 100=0.04%, 250=19.45%, 500=62.87%, 750=17.12%, 1000=0.48% 00:18:36.082 lat (msec) : 2=0.04% 00:18:36.082 cpu : usr=1.00%, sys=4.20%, ctx=2586, majf=0, minf=15 00:18:36.082 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:36.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.082 issued rwts: total=1024,1470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.082 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:36.082 00:18:36.082 Run status group 0 (all jobs): 00:18:36.082 READ: bw=20.5MiB/s (21.5MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=20.5MiB (21.5MB), run=1001-1001msec 00:18:36.082 WRITE: bw=27.3MiB/s (28.6MB/s), 5874KiB/s-9990KiB/s (6015kB/s-10.2MB/s), io=27.3MiB (28.7MB), run=1001-1001msec 00:18:36.082 00:18:36.082 Disk stats (read/write): 00:18:36.082 nvme0n1: ios=1074/1436, merge=0/0, ticks=435/382, in_queue=817, util=88.08% 00:18:36.082 nvme0n2: ios=1852/2048, merge=0/0, ticks=445/374, 
in_queue=819, util=88.25% 00:18:36.082 nvme0n3: ios=1051/1159, merge=0/0, ticks=472/355, in_queue=827, util=89.45% 00:18:36.082 nvme0n4: ios=1024/1137, merge=0/0, ticks=456/324, in_queue=780, util=89.69% 00:18:36.082 09:59:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:36.082 [global] 00:18:36.082 thread=1 00:18:36.082 invalidate=1 00:18:36.082 rw=write 00:18:36.082 time_based=1 00:18:36.082 runtime=1 00:18:36.082 ioengine=libaio 00:18:36.082 direct=1 00:18:36.082 bs=4096 00:18:36.082 iodepth=128 00:18:36.082 norandommap=0 00:18:36.082 numjobs=1 00:18:36.082 00:18:36.082 verify_dump=1 00:18:36.082 verify_backlog=512 00:18:36.082 verify_state_save=0 00:18:36.082 do_verify=1 00:18:36.082 verify=crc32c-intel 00:18:36.082 [job0] 00:18:36.082 filename=/dev/nvme0n1 00:18:36.082 [job1] 00:18:36.082 filename=/dev/nvme0n2 00:18:36.082 [job2] 00:18:36.082 filename=/dev/nvme0n3 00:18:36.082 [job3] 00:18:36.082 filename=/dev/nvme0n4 00:18:36.082 Could not set queue depth (nvme0n1) 00:18:36.082 Could not set queue depth (nvme0n2) 00:18:36.082 Could not set queue depth (nvme0n3) 00:18:36.082 Could not set queue depth (nvme0n4) 00:18:36.082 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:36.082 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:36.082 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:36.082 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:36.082 fio-3.35 00:18:36.082 Starting 4 threads 00:18:37.458 00:18:37.458 job0: (groupid=0, jobs=1): err= 0: pid=76948: Wed May 15 09:59:14 2024 00:18:37.458 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:18:37.458 slat (usec): min=4, max=7767, avg=244.42, stdev=876.33 00:18:37.458 clat (usec): min=8503, max=38450, avg=30486.39, stdev=3439.89 00:18:37.458 lat (usec): min=8515, max=38463, avg=30730.81, stdev=3377.21 00:18:37.458 clat percentiles (usec): 00:18:37.458 | 1.00th=[12256], 5.00th=[26346], 10.00th=[27657], 20.00th=[28967], 00:18:37.458 | 30.00th=[30540], 40.00th=[30802], 50.00th=[31065], 60.00th=[31589], 00:18:37.458 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32900], 95.00th=[34866], 00:18:37.458 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:18:37.458 | 99.99th=[38536] 00:18:37.458 write: IOPS=2067, BW=8271KiB/s (8469kB/s)(8312KiB/1005msec); 0 zone resets 00:18:37.458 slat (usec): min=5, max=8196, avg=233.37, stdev=912.52 00:18:37.459 clat (usec): min=2774, max=38730, avg=30763.25, stdev=3808.37 00:18:37.459 lat (usec): min=5325, max=38746, avg=30996.62, stdev=3731.92 00:18:37.459 clat percentiles (usec): 00:18:37.459 | 1.00th=[ 8291], 5.00th=[27657], 10.00th=[28181], 20.00th=[29230], 00:18:37.459 | 30.00th=[29754], 40.00th=[30540], 50.00th=[31327], 60.00th=[31589], 00:18:37.459 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33424], 95.00th=[35914], 00:18:37.459 | 99.00th=[38011], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:18:37.459 | 99.99th=[38536] 00:18:37.459 bw ( KiB/s): min= 8192, max= 8192, per=17.43%, avg=8192.00, stdev= 0.00, samples=2 00:18:37.459 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:18:37.459 lat (msec) : 4=0.02%, 10=0.95%, 20=1.04%, 50=97.99% 00:18:37.459 cpu : usr=1.49%, sys=5.38%, ctx=851, majf=0, 
minf=11 00:18:37.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:37.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:37.459 issued rwts: total=2048,2078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:37.459 job1: (groupid=0, jobs=1): err= 0: pid=76949: Wed May 15 09:59:14 2024 00:18:37.459 read: IOPS=2030, BW=8123KiB/s (8318kB/s)(8164KiB/1005msec) 00:18:37.459 slat (usec): min=4, max=7663, avg=247.06, stdev=914.53 00:18:37.459 clat (usec): min=921, max=39839, avg=30323.07, stdev=4005.98 00:18:37.459 lat (usec): min=8269, max=39853, avg=30570.13, stdev=3926.66 00:18:37.459 clat percentiles (usec): 00:18:37.459 | 1.00th=[ 8717], 5.00th=[24249], 10.00th=[27395], 20.00th=[29492], 00:18:37.459 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:18:37.459 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32900], 95.00th=[33817], 00:18:37.459 | 99.00th=[36963], 99.50th=[37487], 99.90th=[39584], 99.95th=[39584], 00:18:37.459 | 99.99th=[39584] 00:18:37.459 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:18:37.459 slat (usec): min=6, max=7874, avg=235.52, stdev=854.36 00:18:37.459 clat (usec): min=22147, max=39815, avg=31262.49, stdev=2369.85 00:18:37.459 lat (usec): min=23818, max=39831, avg=31498.01, stdev=2265.34 00:18:37.459 clat percentiles (usec): 00:18:37.459 | 1.00th=[24249], 5.00th=[28181], 10.00th=[29230], 20.00th=[29492], 00:18:37.459 | 30.00th=[30278], 40.00th=[30540], 50.00th=[31065], 60.00th=[31589], 00:18:37.459 | 70.00th=[31851], 80.00th=[32375], 90.00th=[33817], 95.00th=[36439], 00:18:37.459 | 99.00th=[38011], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:18:37.459 | 99.99th=[39584] 00:18:37.459 bw ( KiB/s): min= 8192, max= 8208, per=17.45%, avg=8200.00, stdev=11.31, samples=2 00:18:37.459 iops : min= 2048, max= 2052, avg=2050.00, stdev= 2.83, samples=2 00:18:37.459 lat (usec) : 1000=0.02% 00:18:37.459 lat (msec) : 10=0.78%, 20=0.78%, 50=98.41% 00:18:37.459 cpu : usr=2.39%, sys=4.58%, ctx=820, majf=0, minf=9 00:18:37.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:37.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:37.459 issued rwts: total=2041,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:37.459 job2: (groupid=0, jobs=1): err= 0: pid=76950: Wed May 15 09:59:14 2024 00:18:37.459 read: IOPS=5359, BW=20.9MiB/s (22.0MB/s)(21.0MiB/1002msec) 00:18:37.459 slat (usec): min=6, max=2863, avg=89.38, stdev=399.23 00:18:37.459 clat (usec): min=1599, max=14264, avg=11730.88, stdev=1167.54 00:18:37.459 lat (usec): min=1613, max=15361, avg=11820.26, stdev=1116.60 00:18:37.459 clat percentiles (usec): 00:18:37.459 | 1.00th=[ 7177], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11469], 00:18:37.459 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:18:37.459 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12649], 95.00th=[13042], 00:18:37.459 | 99.00th=[13566], 99.50th=[13566], 99.90th=[13698], 99.95th=[13829], 00:18:37.459 | 99.99th=[14222] 00:18:37.459 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:18:37.459 slat (usec): min=7, max=2979, avg=84.57, 
stdev=370.67 00:18:37.459 clat (usec): min=8547, max=13927, avg=11293.38, stdev=1066.75 00:18:37.459 lat (usec): min=8951, max=13944, avg=11377.95, stdev=1051.21 00:18:37.459 clat percentiles (usec): 00:18:37.459 | 1.00th=[ 9241], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:18:37.459 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:18:37.459 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:18:37.459 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13829], 99.95th=[13960], 00:18:37.459 | 99.99th=[13960] 00:18:37.459 bw ( KiB/s): min=22204, max=22896, per=47.99%, avg=22550.00, stdev=489.32, samples=2 00:18:37.459 iops : min= 5551, max= 5724, avg=5637.50, stdev=122.33, samples=2 00:18:37.459 lat (msec) : 2=0.14%, 10=10.19%, 20=89.67% 00:18:37.459 cpu : usr=5.49%, sys=13.49%, ctx=658, majf=0, minf=15 00:18:37.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:37.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:37.459 issued rwts: total=5370,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:37.459 job3: (groupid=0, jobs=1): err= 0: pid=76951: Wed May 15 09:59:14 2024 00:18:37.459 read: IOPS=2031, BW=8127KiB/s (8323kB/s)(8160KiB/1004msec) 00:18:37.459 slat (usec): min=4, max=9466, avg=246.40, stdev=1009.52 00:18:37.459 clat (usec): min=1102, max=37904, avg=30418.90, stdev=4444.64 00:18:37.459 lat (usec): min=5785, max=38037, avg=30665.30, stdev=4352.97 00:18:37.459 clat percentiles (usec): 00:18:37.459 | 1.00th=[ 6063], 5.00th=[24511], 10.00th=[28181], 20.00th=[30016], 00:18:37.459 | 30.00th=[30540], 40.00th=[31065], 50.00th=[31327], 60.00th=[31589], 00:18:37.459 | 70.00th=[31851], 80.00th=[32375], 90.00th=[32900], 95.00th=[33817], 00:18:37.459 | 99.00th=[35914], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011], 00:18:37.459 | 99.99th=[38011] 00:18:37.459 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:18:37.459 slat (usec): min=4, max=8660, avg=236.15, stdev=894.17 00:18:37.459 clat (usec): min=22259, max=37129, avg=31156.05, stdev=2007.75 00:18:37.459 lat (usec): min=25218, max=38553, avg=31392.20, stdev=1835.79 00:18:37.459 clat percentiles (usec): 00:18:37.459 | 1.00th=[24511], 5.00th=[28181], 10.00th=[28967], 20.00th=[29754], 00:18:37.459 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31065], 60.00th=[31589], 00:18:37.459 | 70.00th=[31851], 80.00th=[32375], 90.00th=[33162], 95.00th=[34866], 00:18:37.459 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:18:37.459 | 99.99th=[36963] 00:18:37.459 bw ( KiB/s): min= 8192, max= 8192, per=17.43%, avg=8192.00, stdev= 0.00, samples=2 00:18:37.459 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:18:37.459 lat (msec) : 2=0.02%, 10=0.78%, 20=0.78%, 50=98.41% 00:18:37.459 cpu : usr=1.69%, sys=5.18%, ctx=888, majf=0, minf=16 00:18:37.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:18:37.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:37.459 issued rwts: total=2040,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:37.459 00:18:37.459 Run status group 0 (all jobs): 00:18:37.459 READ: bw=44.7MiB/s 
(46.9MB/s), 8123KiB/s-20.9MiB/s (8318kB/s-22.0MB/s), io=44.9MiB (47.1MB), run=1002-1005msec 00:18:37.459 WRITE: bw=45.9MiB/s (48.1MB/s), 8151KiB/s-22.0MiB/s (8347kB/s-23.0MB/s), io=46.1MiB (48.4MB), run=1002-1005msec 00:18:37.459 00:18:37.459 Disk stats (read/write): 00:18:37.459 nvme0n1: ios=1586/2000, merge=0/0, ticks=11531/14032, in_queue=25563, util=88.26% 00:18:37.459 nvme0n2: ios=1584/1976, merge=0/0, ticks=11734/14218, in_queue=25952, util=89.47% 00:18:37.459 nvme0n3: ios=4625/4811, merge=0/0, ticks=12691/11821, in_queue=24512, util=89.27% 00:18:37.459 nvme0n4: ios=1536/1951, merge=0/0, ticks=11566/14048, in_queue=25614, util=89.50% 00:18:37.459 09:59:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:37.459 [global] 00:18:37.459 thread=1 00:18:37.459 invalidate=1 00:18:37.459 rw=randwrite 00:18:37.459 time_based=1 00:18:37.459 runtime=1 00:18:37.459 ioengine=libaio 00:18:37.459 direct=1 00:18:37.459 bs=4096 00:18:37.459 iodepth=128 00:18:37.459 norandommap=0 00:18:37.459 numjobs=1 00:18:37.459 00:18:37.459 verify_dump=1 00:18:37.459 verify_backlog=512 00:18:37.459 verify_state_save=0 00:18:37.459 do_verify=1 00:18:37.459 verify=crc32c-intel 00:18:37.459 [job0] 00:18:37.459 filename=/dev/nvme0n1 00:18:37.459 [job1] 00:18:37.459 filename=/dev/nvme0n2 00:18:37.459 [job2] 00:18:37.459 filename=/dev/nvme0n3 00:18:37.459 [job3] 00:18:37.459 filename=/dev/nvme0n4 00:18:37.459 Could not set queue depth (nvme0n1) 00:18:37.459 Could not set queue depth (nvme0n2) 00:18:37.459 Could not set queue depth (nvme0n3) 00:18:37.459 Could not set queue depth (nvme0n4) 00:18:37.459 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:37.459 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:37.459 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:37.460 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:37.460 fio-3.35 00:18:37.460 Starting 4 threads 00:18:38.835 00:18:38.835 job0: (groupid=0, jobs=1): err= 0: pid=77014: Wed May 15 09:59:15 2024 00:18:38.835 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:18:38.835 slat (usec): min=8, max=6419, avg=140.81, stdev=613.79 00:18:38.835 clat (usec): min=10257, max=24711, avg=17568.77, stdev=2819.56 00:18:38.835 lat (usec): min=10278, max=24729, avg=17709.58, stdev=2862.98 00:18:38.835 clat percentiles (usec): 00:18:38.835 | 1.00th=[11076], 5.00th=[12256], 10.00th=[13304], 20.00th=[14746], 00:18:38.835 | 30.00th=[16057], 40.00th=[17695], 50.00th=[18482], 60.00th=[18744], 00:18:38.835 | 70.00th=[19006], 80.00th=[19530], 90.00th=[20841], 95.00th=[22152], 00:18:38.835 | 99.00th=[23200], 99.50th=[23725], 99.90th=[24773], 99.95th=[24773], 00:18:38.835 | 99.99th=[24773] 00:18:38.835 write: IOPS=3628, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1006msec); 0 zone resets 00:18:38.835 slat (usec): min=9, max=6044, avg=127.57, stdev=458.76 00:18:38.835 clat (usec): min=5135, max=24566, avg=17528.92, stdev=2873.71 00:18:38.835 lat (usec): min=6256, max=24580, avg=17656.49, stdev=2898.66 00:18:38.835 clat percentiles (usec): 00:18:38.835 | 1.00th=[ 8979], 5.00th=[13173], 10.00th=[13566], 20.00th=[14222], 00:18:38.835 | 30.00th=[16909], 40.00th=[17957], 50.00th=[18482], 60.00th=[18744], 00:18:38.835 | 
70.00th=[19268], 80.00th=[19530], 90.00th=[20055], 95.00th=[21627], 00:18:38.835 | 99.00th=[23462], 99.50th=[24249], 99.90th=[24511], 99.95th=[24511], 00:18:38.835 | 99.99th=[24511] 00:18:38.835 bw ( KiB/s): min=14032, max=14640, per=37.07%, avg=14336.00, stdev=429.92, samples=2 00:18:38.835 iops : min= 3508, max= 3660, avg=3584.00, stdev=107.48, samples=2 00:18:38.835 lat (msec) : 10=0.83%, 20=87.12%, 50=12.05% 00:18:38.835 cpu : usr=3.58%, sys=8.66%, ctx=581, majf=0, minf=12 00:18:38.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:38.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:38.835 issued rwts: total=3584,3650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:38.835 job1: (groupid=0, jobs=1): err= 0: pid=77015: Wed May 15 09:59:15 2024 00:18:38.835 read: IOPS=1535, BW=6141KiB/s (6288kB/s)(6196KiB/1009msec) 00:18:38.835 slat (usec): min=6, max=30683, avg=273.97, stdev=1823.37 00:18:38.835 clat (usec): min=7301, max=72518, avg=34148.09, stdev=13080.74 00:18:38.835 lat (usec): min=8501, max=72539, avg=34422.06, stdev=13216.52 00:18:38.835 clat percentiles (usec): 00:18:38.835 | 1.00th=[13698], 5.00th=[20055], 10.00th=[21103], 20.00th=[21890], 00:18:38.835 | 30.00th=[26084], 40.00th=[26608], 50.00th=[28181], 60.00th=[33817], 00:18:38.835 | 70.00th=[42730], 80.00th=[45351], 90.00th=[55313], 95.00th=[60556], 00:18:38.835 | 99.00th=[67634], 99.50th=[69731], 99.90th=[72877], 99.95th=[72877], 00:18:38.835 | 99.99th=[72877] 00:18:38.835 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:18:38.835 slat (usec): min=4, max=24751, avg=273.38, stdev=1435.93 00:18:38.835 clat (usec): min=8161, max=72908, avg=36554.21, stdev=14444.30 00:18:38.835 lat (usec): min=8181, max=72932, avg=36827.60, stdev=14543.49 00:18:38.835 clat percentiles (usec): 00:18:38.835 | 1.00th=[11338], 5.00th=[17957], 10.00th=[19268], 20.00th=[23987], 00:18:38.835 | 30.00th=[26608], 40.00th=[27395], 50.00th=[33817], 60.00th=[42206], 00:18:38.835 | 70.00th=[44827], 80.00th=[50594], 90.00th=[57934], 95.00th=[60556], 00:18:38.835 | 99.00th=[70779], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:18:38.835 | 99.99th=[72877] 00:18:38.836 bw ( KiB/s): min= 7280, max= 8208, per=20.02%, avg=7744.00, stdev=656.20, samples=2 00:18:38.836 iops : min= 1820, max= 2052, avg=1936.00, stdev=164.05, samples=2 00:18:38.836 lat (msec) : 10=0.64%, 20=7.81%, 50=74.81%, 100=16.74% 00:18:38.836 cpu : usr=1.88%, sys=4.56%, ctx=335, majf=0, minf=5 00:18:38.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:18:38.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:38.836 issued rwts: total=1549,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:38.836 job2: (groupid=0, jobs=1): err= 0: pid=77016: Wed May 15 09:59:15 2024 00:18:38.836 read: IOPS=1419, BW=5678KiB/s (5814kB/s)(5752KiB/1013msec) 00:18:38.836 slat (usec): min=8, max=20605, avg=244.24, stdev=1304.95 00:18:38.836 clat (usec): min=470, max=83071, avg=34827.34, stdev=17327.09 00:18:38.836 lat (usec): min=4851, max=97467, avg=35071.58, stdev=17354.11 00:18:38.836 clat percentiles (usec): 00:18:38.836 | 1.00th=[ 5145], 5.00th=[13698], 
10.00th=[18482], 20.00th=[19792], 00:18:38.836 | 30.00th=[27395], 40.00th=[32375], 50.00th=[33162], 60.00th=[33817], 00:18:38.836 | 70.00th=[34866], 80.00th=[40109], 90.00th=[65274], 95.00th=[76022], 00:18:38.836 | 99.00th=[82314], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:18:38.836 | 99.99th=[83362] 00:18:38.836 write: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec); 0 zone resets 00:18:38.836 slat (usec): min=6, max=48009, avg=415.16, stdev=2631.99 00:18:38.836 clat (msec): min=16, max=150, avg=45.37, stdev=27.15 00:18:38.836 lat (msec): min=16, max=150, avg=45.79, stdev=27.32 00:18:38.836 clat percentiles (msec): 00:18:38.836 | 1.00th=[ 18], 5.00th=[ 22], 10.00th=[ 27], 20.00th=[ 29], 00:18:38.836 | 30.00th=[ 30], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 40], 00:18:38.836 | 70.00th=[ 46], 80.00th=[ 62], 90.00th=[ 80], 95.00th=[ 97], 00:18:38.836 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:18:38.836 | 99.99th=[ 150] 00:18:38.836 bw ( KiB/s): min= 4232, max= 8064, per=15.90%, avg=6148.00, stdev=2709.63, samples=2 00:18:38.836 iops : min= 1058, max= 2016, avg=1537.00, stdev=677.41, samples=2 00:18:38.836 lat (usec) : 500=0.03% 00:18:38.836 lat (msec) : 10=2.15%, 20=9.25%, 50=68.49%, 100=17.65%, 250=2.42% 00:18:38.836 cpu : usr=1.98%, sys=4.05%, ctx=120, majf=0, minf=13 00:18:38.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:18:38.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:38.836 issued rwts: total=1438,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:38.836 job3: (groupid=0, jobs=1): err= 0: pid=77017: Wed May 15 09:59:15 2024 00:18:38.836 read: IOPS=2378, BW=9514KiB/s (9743kB/s)(9600KiB/1009msec) 00:18:38.836 slat (usec): min=3, max=24525, avg=213.17, stdev=1198.75 00:18:38.836 clat (usec): min=178, max=81792, avg=24854.10, stdev=11426.11 00:18:38.836 lat (usec): min=9903, max=81818, avg=25067.26, stdev=11522.98 00:18:38.836 clat percentiles (usec): 00:18:38.836 | 1.00th=[12256], 5.00th=[15664], 10.00th=[17433], 20.00th=[19006], 00:18:38.836 | 30.00th=[19792], 40.00th=[20579], 50.00th=[21890], 60.00th=[22152], 00:18:38.836 | 70.00th=[22676], 80.00th=[26608], 90.00th=[39584], 95.00th=[50594], 00:18:38.836 | 99.00th=[71828], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:18:38.836 | 99.99th=[82314] 00:18:38.836 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:18:38.836 slat (usec): min=5, max=17281, avg=184.04, stdev=927.59 00:18:38.836 clat (usec): min=12388, max=64717, avg=26414.74, stdev=11066.59 00:18:38.836 lat (usec): min=12408, max=64727, avg=26598.78, stdev=11125.02 00:18:38.836 clat percentiles (usec): 00:18:38.836 | 1.00th=[13566], 5.00th=[17433], 10.00th=[18744], 20.00th=[19530], 00:18:38.836 | 30.00th=[20317], 40.00th=[20841], 50.00th=[21627], 60.00th=[23200], 00:18:38.836 | 70.00th=[24249], 80.00th=[33817], 90.00th=[44827], 95.00th=[50594], 00:18:38.836 | 99.00th=[64226], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:18:38.836 | 99.99th=[64750] 00:18:38.836 bw ( KiB/s): min= 8192, max=12312, per=26.51%, avg=10252.00, stdev=2913.28, samples=2 00:18:38.836 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:18:38.836 lat (usec) : 250=0.02% 00:18:38.836 lat (msec) : 10=0.10%, 20=28.99%, 50=65.77%, 100=5.12% 00:18:38.836 cpu : usr=3.27%, sys=7.14%, ctx=460, 
majf=0, minf=7 00:18:38.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:38.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:38.836 issued rwts: total=2400,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:38.836 00:18:38.836 Run status group 0 (all jobs): 00:18:38.836 READ: bw=34.6MiB/s (36.3MB/s), 5678KiB/s-13.9MiB/s (5814kB/s-14.6MB/s), io=35.0MiB (36.7MB), run=1006-1013msec 00:18:38.836 WRITE: bw=37.8MiB/s (39.6MB/s), 6065KiB/s-14.2MiB/s (6211kB/s-14.9MB/s), io=38.3MiB (40.1MB), run=1006-1013msec 00:18:38.836 00:18:38.836 Disk stats (read/write): 00:18:38.836 nvme0n1: ios=3053/3072, merge=0/0, ticks=16871/17167, in_queue=34038, util=88.33% 00:18:38.836 nvme0n2: ios=1486/1536, merge=0/0, ticks=38177/58080, in_queue=96257, util=88.99% 00:18:38.836 nvme0n3: ios=1045/1206, merge=0/0, ticks=9684/16261, in_queue=25945, util=89.69% 00:18:38.836 nvme0n4: ios=2048/2515, merge=0/0, ticks=21445/28095, in_queue=49540, util=88.89% 00:18:38.836 09:59:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:38.836 09:59:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77034 00:18:38.836 09:59:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:38.836 09:59:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:38.836 [global] 00:18:38.836 thread=1 00:18:38.836 invalidate=1 00:18:38.836 rw=read 00:18:38.836 time_based=1 00:18:38.836 runtime=10 00:18:38.836 ioengine=libaio 00:18:38.836 direct=1 00:18:38.836 bs=4096 00:18:38.836 iodepth=1 00:18:38.836 norandommap=1 00:18:38.836 numjobs=1 00:18:38.836 00:18:38.836 [job0] 00:18:38.836 filename=/dev/nvme0n1 00:18:38.836 [job1] 00:18:38.836 filename=/dev/nvme0n2 00:18:38.836 [job2] 00:18:38.836 filename=/dev/nvme0n3 00:18:38.836 [job3] 00:18:38.836 filename=/dev/nvme0n4 00:18:38.836 Could not set queue depth (nvme0n1) 00:18:38.836 Could not set queue depth (nvme0n2) 00:18:38.836 Could not set queue depth (nvme0n3) 00:18:38.836 Could not set queue depth (nvme0n4) 00:18:39.095 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:39.095 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:39.095 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:39.095 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:39.095 fio-3.35 00:18:39.095 Starting 4 threads 00:18:42.379 09:59:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:42.379 fio: pid=77077, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:42.379 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=26927104, buflen=4096 00:18:42.379 09:59:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:42.379 fio: pid=77076, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:42.379 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=47661056, buflen=4096 00:18:42.379 09:59:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev 
in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:42.379 09:59:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:42.637 fio: pid=77074, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:42.637 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=21413888, buflen=4096 00:18:42.637 09:59:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:42.637 09:59:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:42.897 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=33329152, buflen=4096 00:18:42.897 fio: pid=77075, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:42.897 00:18:42.897 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77074: Wed May 15 09:59:20 2024 00:18:42.897 read: IOPS=1513, BW=6053KiB/s (6198kB/s)(20.4MiB/3455msec) 00:18:42.897 slat (usec): min=7, max=10456, avg=25.28, stdev=259.70 00:18:42.897 clat (usec): min=4, max=12668, avg=633.52, stdev=526.12 00:18:42.897 lat (usec): min=159, max=12708, avg=658.80, stdev=588.57 00:18:42.897 clat percentiles (usec): 00:18:42.897 | 1.00th=[ 184], 5.00th=[ 269], 10.00th=[ 302], 20.00th=[ 379], 00:18:42.897 | 30.00th=[ 420], 40.00th=[ 478], 50.00th=[ 586], 60.00th=[ 668], 00:18:42.897 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 848], 95.00th=[ 1123], 00:18:42.897 | 99.00th=[ 2606], 99.50th=[ 3621], 99.90th=[ 7046], 99.95th=[10028], 00:18:42.897 | 99.99th=[12649] 00:18:42.897 bw ( KiB/s): min= 3480, max= 9456, per=17.57%, avg=5938.67, stdev=1981.94, samples=6 00:18:42.897 iops : min= 870, max= 2364, avg=1484.67, stdev=495.48, samples=6 00:18:42.897 lat (usec) : 10=0.02%, 250=2.54%, 500=40.73%, 750=38.54%, 1000=12.05% 00:18:42.897 lat (msec) : 2=4.34%, 4=1.36%, 10=0.34%, 20=0.06% 00:18:42.897 cpu : usr=0.69%, sys=2.37%, ctx=5254, majf=0, minf=1 00:18:42.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:42.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.897 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.897 issued rwts: total=5229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:42.897 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77075: Wed May 15 09:59:20 2024 00:18:42.897 read: IOPS=2177, BW=8710KiB/s (8919kB/s)(31.8MiB/3737msec) 00:18:42.897 slat (usec): min=7, max=8725, avg=21.39, stdev=211.56 00:18:42.897 clat (usec): min=120, max=10761, avg=435.56, stdev=465.71 00:18:42.897 lat (usec): min=134, max=10799, avg=456.95, stdev=515.11 00:18:42.897 clat percentiles (usec): 00:18:42.897 | 1.00th=[ 135], 5.00th=[ 153], 10.00th=[ 169], 20.00th=[ 192], 00:18:42.897 | 30.00th=[ 217], 40.00th=[ 237], 50.00th=[ 265], 60.00th=[ 306], 00:18:42.897 | 70.00th=[ 603], 80.00th=[ 693], 90.00th=[ 758], 95.00th=[ 889], 00:18:42.897 | 99.00th=[ 2024], 99.50th=[ 2966], 99.90th=[ 6390], 99.95th=[ 8455], 00:18:42.897 | 99.99th=[10814] 00:18:42.897 bw ( KiB/s): min= 3480, max=16536, per=24.98%, avg=8442.43, stdev=5013.43, samples=7 00:18:42.897 iops : min= 870, max= 4134, avg=2110.57, stdev=1253.36, samples=7 00:18:42.897 lat (usec) : 250=45.32%, 500=21.04%, 750=22.92%, 
1000=7.20% 00:18:42.897 lat (msec) : 2=2.45%, 4=0.79%, 10=0.26%, 20=0.02% 00:18:42.897 cpu : usr=0.88%, sys=2.97%, ctx=8168, majf=0, minf=1 00:18:42.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:42.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.897 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.897 issued rwts: total=8138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:42.897 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77076: Wed May 15 09:59:20 2024 00:18:42.897 read: IOPS=3624, BW=14.2MiB/s (14.8MB/s)(45.5MiB/3211msec) 00:18:42.897 slat (usec): min=8, max=17819, avg=18.37, stdev=196.01 00:18:42.897 clat (usec): min=4, max=20203, avg=256.16, stdev=275.66 00:18:42.897 lat (usec): min=151, max=20216, avg=274.53, stdev=338.32 00:18:42.897 clat percentiles (usec): 00:18:42.897 | 1.00th=[ 161], 5.00th=[ 178], 10.00th=[ 194], 20.00th=[ 225], 00:18:42.897 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 251], 60.00th=[ 260], 00:18:42.897 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 306], 00:18:42.897 | 99.00th=[ 347], 99.50th=[ 412], 99.90th=[ 1713], 99.95th=[ 5866], 00:18:42.897 | 99.99th=[11731] 00:18:42.897 bw ( KiB/s): min=12808, max=16360, per=42.93%, avg=14508.00, stdev=1307.28, samples=6 00:18:42.897 iops : min= 3202, max= 4090, avg=3627.00, stdev=326.82, samples=6 00:18:42.897 lat (usec) : 10=0.04%, 20=0.01%, 100=0.04%, 250=48.04%, 500=51.61% 00:18:42.897 lat (usec) : 750=0.09%, 1000=0.04% 00:18:42.897 lat (msec) : 2=0.03%, 4=0.03%, 10=0.03%, 20=0.02%, 50=0.01% 00:18:42.897 cpu : usr=1.12%, sys=4.61%, ctx=11683, majf=0, minf=1 00:18:42.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:42.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.897 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.897 issued rwts: total=11637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:42.897 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77077: Wed May 15 09:59:20 2024 00:18:42.897 read: IOPS=2227, BW=8908KiB/s (9122kB/s)(25.7MiB/2952msec) 00:18:42.897 slat (usec): min=8, max=101, avg=18.79, stdev= 8.18 00:18:42.897 clat (usec): min=138, max=8255, avg=428.09, stdev=320.62 00:18:42.897 lat (usec): min=165, max=8294, avg=446.88, stdev=322.59 00:18:42.897 clat percentiles (usec): 00:18:42.897 | 1.00th=[ 202], 5.00th=[ 243], 10.00th=[ 265], 20.00th=[ 297], 00:18:42.897 | 30.00th=[ 322], 40.00th=[ 347], 50.00th=[ 379], 60.00th=[ 408], 00:18:42.897 | 70.00th=[ 441], 80.00th=[ 469], 90.00th=[ 529], 95.00th=[ 750], 00:18:42.897 | 99.00th=[ 1663], 99.50th=[ 2245], 99.90th=[ 4555], 99.95th=[ 6259], 00:18:42.897 | 99.99th=[ 8225] 00:18:42.897 bw ( KiB/s): min= 7376, max=10648, per=28.14%, avg=9510.40, stdev=1304.99, samples=5 00:18:42.897 iops : min= 1844, max= 2662, avg=2377.60, stdev=326.25, samples=5 00:18:42.897 lat (usec) : 250=6.27%, 500=80.33%, 750=8.44%, 1000=2.28% 00:18:42.897 lat (msec) : 2=1.92%, 4=0.62%, 10=0.12% 00:18:42.897 cpu : usr=0.91%, sys=3.49%, ctx=6576, majf=0, minf=2 00:18:42.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:42.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
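The err=121 (Remote I/O error) failures threaded through the fio output above are the point of this test rather than a problem: while job0-job3 are still reading from the exported namespaces, target/fio.sh deletes the backing bdevs out from under them over RPC, which is what the bdev_raid_delete/bdev_malloc_delete calls in the trace are doing. Stripped of the xtrace noise, that hotplug step condenses to roughly the bash sketch below (the rpc.py path and bdev names are the ones visible in the trace; $fio_pid corresponds to pid 77034, and the real script builds the bdev list from $malloc_bdevs/$raid_malloc_bdevs/$concat_malloc_bdevs rather than hard-coding it):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # raid/concat bdevs go first, then each plain malloc bdev behind the namespaces
    "$rpc" bdev_raid_delete concat0
    "$rpc" bdev_raid_delete raid0
    for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$malloc_bdev"
    done
    # the background read job is expected to die with an I/O error,
    # which is why the trace records fio_status=4 after the wait
    wait "$fio_pid" || fio_status=$?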
00:18:42.897 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.897 issued rwts: total=6575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:42.897 00:18:42.897 Run status group 0 (all jobs): 00:18:42.897 READ: bw=33.0MiB/s (34.6MB/s), 6053KiB/s-14.2MiB/s (6198kB/s-14.8MB/s), io=123MiB (129MB), run=2952-3737msec 00:18:42.897 00:18:42.897 Disk stats (read/write): 00:18:42.897 nvme0n1: ios=5050/0, merge=0/0, ticks=3172/0, in_queue=3172, util=94.76% 00:18:42.897 nvme0n2: ios=7625/0, merge=0/0, ticks=3384/0, in_queue=3384, util=95.07% 00:18:42.897 nvme0n3: ios=11260/0, merge=0/0, ticks=2879/0, in_queue=2879, util=94.38% 00:18:42.897 nvme0n4: ios=6460/0, merge=0/0, ticks=2736/0, in_queue=2736, util=96.83% 00:18:42.897 09:59:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:42.897 09:59:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:43.156 09:59:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:43.156 09:59:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:43.723 09:59:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:43.723 09:59:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:43.980 09:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:43.980 09:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:44.239 09:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:44.239 09:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:44.807 09:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:44.807 09:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77034 00:18:44.807 09:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:44.807 09:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:44.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:44.807 09:59:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:44.807 09:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:18:44.807 09:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:44.807 09:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:18:44.807 09:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:18:44.807 09:59:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:44.807 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:18:44.807 09:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:44.807 
nvmf hotplug test: fio failed as expected 00:18:44.807 09:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:44.807 09:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.065 rmmod nvme_tcp 00:18:45.065 rmmod nvme_fabrics 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 76520 ']' 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 76520 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' -z 76520 ']' 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 76520 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:45.065 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 76520 00:18:45.324 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:45.325 killing process with pid 76520 00:18:45.325 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:45.325 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 76520' 00:18:45.325 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 76520 00:18:45.325 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 76520 00:18:45.325 [2024-05-15 09:59:22.465812] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:45.584 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:45.584 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:45.584 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:45.584 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ 
nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:45.584 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:45.584 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.584 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.584 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.584 09:59:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:45.584 00:18:45.584 real 0m21.353s 00:18:45.584 user 1m21.058s 00:18:45.584 sys 0m8.779s 00:18:45.584 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:45.584 09:59:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.584 ************************************ 00:18:45.584 END TEST nvmf_fio_target 00:18:45.584 ************************************ 00:18:45.584 09:59:22 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:45.584 09:59:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:45.584 09:59:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:45.584 09:59:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.843 ************************************ 00:18:45.843 START TEST nvmf_bdevio 00:18:45.843 ************************************ 00:18:45.843 09:59:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:45.843 * Looking for test storage... 00:18:45.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.843 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.844 
09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:45.844 Cannot find device "nvmf_tgt_br" 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:45.844 Cannot find device "nvmf_tgt_br2" 
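The "Cannot find device" messages just above are only nvmf_veth_init tearing down leftovers from a previous run; the setup it performs next (and which the trace below replays command by command) is a pair of veth links per endpoint, the target-side ends moved into their own network namespace, and the host-side peers tied together on a bridge. Condensed into plain bash, using the interface names and 10.0.0.x addresses from the trace (ordering and error handling are simplified):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator, two for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers and open TCP/4420 for the initiator
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # sanity-check the path before starting the target

With that in place the initiator reaches 10.0.0.2:4420 through the bridge, while nvmf_tgt itself is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78, as the nvmfappstart trace further down shows).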
00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:45.844 Cannot find device "nvmf_tgt_br" 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:45.844 Cannot find device "nvmf_tgt_br2" 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:18:45.844 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:46.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:46.103 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:46.103 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip 
link set nvmf_init_br master nvmf_br 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:46.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:18:46.362 00:18:46.362 --- 10.0.0.2 ping statistics --- 00:18:46.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.362 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:46.362 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:46.362 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:18:46.362 00:18:46.362 --- 10.0.0.3 ping statistics --- 00:18:46.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.362 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:46.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:46.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:18:46.362 00:18:46.362 --- 10.0.0.1 ping statistics --- 00:18:46.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.362 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77414 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77414 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 77414 ']' 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:46.362 09:59:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:46.362 [2024-05-15 09:59:23.660242] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:18:46.362 [2024-05-15 09:59:23.660372] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.620 [2024-05-15 09:59:23.824658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:46.878 [2024-05-15 09:59:24.033982] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.878 [2024-05-15 09:59:24.034048] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.878 [2024-05-15 09:59:24.034064] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.878 [2024-05-15 09:59:24.034078] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.878 [2024-05-15 09:59:24.034103] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.878 [2024-05-15 09:59:24.034296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:46.878 [2024-05-15 09:59:24.034910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:46.878 [2024-05-15 09:59:24.035040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:46.878 [2024-05-15 09:59:24.035045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:47.491 [2024-05-15 09:59:24.761646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:18:47.491 Malloc0 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.491 09:59:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.492 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.492 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:47.492 [2024-05-15 09:59:24.854526] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:47.492 [2024-05-15 09:59:24.855213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.492 09:59:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.492 09:59:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:47.492 09:59:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:47.492 09:59:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:47.492 09:59:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:47.492 09:59:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:47.492 09:59:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:47.492 { 00:18:47.492 "params": { 00:18:47.492 "name": "Nvme$subsystem", 00:18:47.492 "trtype": "$TEST_TRANSPORT", 00:18:47.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:47.492 "adrfam": "ipv4", 00:18:47.492 "trsvcid": "$NVMF_PORT", 00:18:47.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:47.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:47.492 "hdgst": ${hdgst:-false}, 00:18:47.492 "ddgst": ${ddgst:-false} 00:18:47.492 }, 00:18:47.492 "method": "bdev_nvme_attach_controller" 00:18:47.492 } 00:18:47.492 EOF 00:18:47.492 )") 00:18:47.492 09:59:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:47.492 09:59:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
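To summarize the bdevio bring-up the trace has just stepped through: the target gets a TCP transport, one 64 MiB / 512 B-block malloc bdev exported as a namespace of nqn.2016-06.io.spdk:cnode1, and a listener on the namespaced 10.0.0.2:4420 address; bdevio is then started with a small JSON config whose only job is to attach an NVMe-oF controller to that subsystem (the rendered config is printed right below this point). A rough bash equivalent is sketched here, assuming the default /var/tmp/spdk.sock RPC socket; the trace feeds the config over an anonymous /dev/fd/62 descriptor rather than a file, the /tmp file name is hypothetical, and the outer "subsystems"/"bdev" wrapper is an assumption about how gen_nvmf_target_json packages the fragment, since only the inner bdev_nvme_attach_controller object appears verbatim in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # hypothetical file name; the script passes the same JSON via --json /dev/fd/62
    cat > /tmp/bdevio_nvme.json <<'JSON'
    {
      "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                    "adrfam": "ipv4", "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false, "ddgst": false }
      } ] } ]
    }
    JSON
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json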
00:18:47.750 09:59:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:47.750 09:59:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:47.750 "params": { 00:18:47.750 "name": "Nvme1", 00:18:47.750 "trtype": "tcp", 00:18:47.750 "traddr": "10.0.0.2", 00:18:47.750 "adrfam": "ipv4", 00:18:47.750 "trsvcid": "4420", 00:18:47.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.750 "hdgst": false, 00:18:47.750 "ddgst": false 00:18:47.750 }, 00:18:47.750 "method": "bdev_nvme_attach_controller" 00:18:47.750 }' 00:18:47.750 [2024-05-15 09:59:24.916316] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:18:47.750 [2024-05-15 09:59:24.916425] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77481 ] 00:18:47.750 [2024-05-15 09:59:25.070451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:48.008 [2024-05-15 09:59:25.249018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.009 [2024-05-15 09:59:25.249228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.009 [2024-05-15 09:59:25.249238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.267 I/O targets: 00:18:48.267 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:48.267 00:18:48.267 00:18:48.267 CUnit - A unit testing framework for C - Version 2.1-3 00:18:48.267 http://cunit.sourceforge.net/ 00:18:48.267 00:18:48.267 00:18:48.267 Suite: bdevio tests on: Nvme1n1 00:18:48.267 Test: blockdev write read block ...passed 00:18:48.267 Test: blockdev write zeroes read block ...passed 00:18:48.267 Test: blockdev write zeroes read no split ...passed 00:18:48.267 Test: blockdev write zeroes read split ...passed 00:18:48.267 Test: blockdev write zeroes read split partial ...passed 00:18:48.267 Test: blockdev reset ...[2024-05-15 09:59:25.615180] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:48.267 [2024-05-15 09:59:25.615323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e73660 (9): Bad file descriptor 00:18:48.267 [2024-05-15 09:59:25.633450] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:48.267 passed 00:18:48.267 Test: blockdev write read 8 blocks ...passed 00:18:48.267 Test: blockdev write read size > 128k ...passed 00:18:48.267 Test: blockdev write read invalid size ...passed 00:18:48.525 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:48.525 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:48.525 Test: blockdev write read max offset ...passed 00:18:48.525 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:48.525 Test: blockdev writev readv 8 blocks ...passed 00:18:48.525 Test: blockdev writev readv 30 x 1block ...passed 00:18:48.525 Test: blockdev writev readv block ...passed 00:18:48.525 Test: blockdev writev readv size > 128k ...passed 00:18:48.525 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:48.525 Test: blockdev comparev and writev ...[2024-05-15 09:59:25.804288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.525 [2024-05-15 09:59:25.804344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:48.525 [2024-05-15 09:59:25.804360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.525 [2024-05-15 09:59:25.804370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:48.525 [2024-05-15 09:59:25.804799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.525 [2024-05-15 09:59:25.804815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:48.525 [2024-05-15 09:59:25.804830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.525 [2024-05-15 09:59:25.804841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:48.525 [2024-05-15 09:59:25.805349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.525 [2024-05-15 09:59:25.805376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:48.525 [2024-05-15 09:59:25.805392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.525 [2024-05-15 09:59:25.805404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:48.525 [2024-05-15 09:59:25.806027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.525 [2024-05-15 09:59:25.806059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:48.525 [2024-05-15 09:59:25.806075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:48.525 [2024-05-15 09:59:25.806086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:48.525 passed 00:18:48.525 Test: blockdev nvme passthru rw ...passed 00:18:48.525 Test: blockdev nvme passthru vendor specific ...[2024-05-15 09:59:25.888579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:48.525 [2024-05-15 09:59:25.888650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:48.525 [2024-05-15 09:59:25.888797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:48.525 [2024-05-15 09:59:25.888821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:48.525 [2024-05-15 09:59:25.888950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:48.525 [2024-05-15 09:59:25.888971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:48.525 [2024-05-15 09:59:25.889124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:48.525 [2024-05-15 09:59:25.889148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:48.525 passed 00:18:48.525 Test: blockdev nvme admin passthru ...passed 00:18:48.784 Test: blockdev copy ...passed 00:18:48.784 00:18:48.784 Run Summary: Type Total Ran Passed Failed Inactive 00:18:48.784 suites 1 1 n/a 0 0 00:18:48.784 tests 23 23 23 0 0 00:18:48.784 asserts 152 152 152 0 n/a 00:18:48.784 00:18:48.784 Elapsed time = 0.907 seconds 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.042 rmmod nvme_tcp 00:18:49.042 rmmod nvme_fabrics 00:18:49.042 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77414 ']' 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77414 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 77414 ']' 00:18:49.301 09:59:26 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 77414 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 77414 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:18:49.301 killing process with pid 77414 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 77414' 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 77414 00:18:49.301 [2024-05-15 09:59:26.464951] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:49.301 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 77414 00:18:49.870 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.870 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.870 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.870 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.870 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.870 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.870 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.870 09:59:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.870 09:59:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:49.870 00:18:49.870 real 0m4.035s 00:18:49.870 user 0m13.032s 00:18:49.870 sys 0m1.371s 00:18:49.870 09:59:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:49.870 09:59:27 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:49.870 ************************************ 00:18:49.870 END TEST nvmf_bdevio 00:18:49.870 ************************************ 00:18:49.870 09:59:27 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:49.870 09:59:27 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:49.870 09:59:27 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:49.870 09:59:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:49.870 ************************************ 00:18:49.870 START TEST nvmf_auth_target 00:18:49.870 ************************************ 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:49.870 * Looking for test storage... 
00:18:49.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:49.870 Cannot find device "nvmf_tgt_br" 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:49.870 Cannot find device "nvmf_tgt_br2" 00:18:49.870 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:18:49.871 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:49.871 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:50.129 Cannot find device "nvmf_tgt_br" 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:18:50.129 09:59:27 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:50.129 Cannot find device "nvmf_tgt_br2" 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:50.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:50.129 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:50.129 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:50.388 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:50.388 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:50.388 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:50.388 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:50.388 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:50.388 
09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:50.388 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:50.388 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:50.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:18:50.388 00:18:50.388 --- 10.0.0.2 ping statistics --- 00:18:50.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.388 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:50.388 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:50.388 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:50.388 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:18:50.388 00:18:50.388 --- 10.0.0.3 ping statistics --- 00:18:50.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.388 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:18:50.388 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:50.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:18:50.388 00:18:50.388 --- 10.0.0.1 ping statistics --- 00:18:50.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.388 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77661 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77661 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 77661 ']' 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:50.389 09:59:27 
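The block of ip/iptables commands traced above builds the virtual test network: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace and is reached from the host over veth pairs joined by the nvmf_br bridge, with TCP port 4420 opened for NVMe/TCP and a ping check confirming reachability before the target app starts. Reduced to its essentials (a rough distillation of the commands in the trace; the second target interface, nvmf_tgt_if2 at 10.0.0.3, follows the same pattern and is left out):

    # Namespace + veth + bridge topology used by the test (approximate distillation of the trace).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                              # bridge the two host-side veth ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                           # verify initiator -> target reachability

With this in place, NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk", so nvmf_tgt listens on 10.0.0.2:4420 inside the namespace while the host connects from 10.0.0.1.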
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:50.389 09:59:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.339 09:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:51.339 09:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:18:51.339 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.339 09:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:51.339 09:59:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=77705 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=00c4c1e0d390735cf17180d003116c5896f2df062672c0ee 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.66H 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 00c4c1e0d390735cf17180d003116c5896f2df062672c0ee 0 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 00c4c1e0d390735cf17180d003116c5896f2df062672c0ee 0 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=00c4c1e0d390735cf17180d003116c5896f2df062672c0ee 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.66H 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.66H 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.66H 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=85bb821245e81e0aa02243984fea5c4b 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Ew8 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 85bb821245e81e0aa02243984fea5c4b 1 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 85bb821245e81e0aa02243984fea5c4b 1 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=85bb821245e81e0aa02243984fea5c4b 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Ew8 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Ew8 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.Ew8 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4a3ad8230f8771678f8f1a7576c0f36417eaee77867f00d5 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.WGq 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4a3ad8230f8771678f8f1a7576c0f36417eaee77867f00d5 2 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4a3ad8230f8771678f8f1a7576c0f36417eaee77867f00d5 2 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 
00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4a3ad8230f8771678f8f1a7576c0f36417eaee77867f00d5 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.WGq 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.WGq 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.WGq 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=be651e5de46c816e4ae07929e696f9ca710462fc61f07aee0201e08db7c6222c 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.YF4 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key be651e5de46c816e4ae07929e696f9ca710462fc61f07aee0201e08db7c6222c 3 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 be651e5de46c816e4ae07929e696f9ca710462fc61f07aee0201e08db7c6222c 3 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:51.598 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:51.599 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=be651e5de46c816e4ae07929e696f9ca710462fc61f07aee0201e08db7c6222c 00:18:51.599 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:51.599 09:59:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:51.857 09:59:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.YF4 00:18:51.857 09:59:29 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.YF4 00:18:51.857 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.YF4 00:18:51.857 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 77661 00:18:51.857 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 77661 ']' 00:18:51.857 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.857 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:51.857 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
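All four keys above come out of the same gen_dhchap_key path: draw len/2 random bytes with xxd, keep the hex string as the key material, wrap it in a DHHC-1 secret, and store it in a mode-0600 temp file whose path becomes keys[i]. The formatting step itself runs an inline python snippet whose body is not captured in the trace, so the sketch below is an approximation: judging from the secrets used later (e.g. DHHC-1:00:MDBjNGMx...), the payload appears to be base64 of the hex string plus a short checksum suffix, which the sketch omits.

    # Rough sketch of gen_dhchap_key <digest> <len> as traced above (hypothetical helper, not SPDK source).
    gen_dhchap_key_sketch() {
        local digest=$1 len=$2 key file
        local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # digest ids, as declared in the trace
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)             # len hex chars of random key material
        file=$(mktemp -t "spdk.key-$digest.XXX")                   # e.g. /tmp/spdk.key-null.66H
        # Simplified formatting: base64 of the hex string only; the real helper appends a checksum
        # before encoding, so this output is illustrative rather than a spec-exact DHHC-1 secret.
        printf 'DHHC-1:%02x:%s:\n' "${ids[$digest]}" "$(printf '%s' "$key" | base64 -w0)" > "$file"
        chmod 0600 "$file"
        echo "$file"                                               # caller stores this path in keys[i]
    }

Called as, for example, gen_dhchap_key_sketch sha384 48, this yields a file analogous to /tmp/spdk.key-sha384.WGq above; the keyring_file_add_key RPCs that follow register each file under key0..key3 on both the target (/var/tmp/spdk.sock) and the host (/var/tmp/host.sock).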
00:18:51.857 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:51.857 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.116 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:52.116 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:18:52.116 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 77705 /var/tmp/host.sock 00:18:52.116 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 77705 ']' 00:18:52.116 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:18:52.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:52.116 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:52.116 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:52.116 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:52.116 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.66H 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.66H 00:18:52.373 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.66H 00:18:52.629 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:52.629 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Ew8 00:18:52.629 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.629 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.629 09:59:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.630 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Ew8 00:18:52.630 09:59:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 
/tmp/spdk.key-sha256.Ew8 00:18:52.886 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:52.886 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.WGq 00:18:52.886 09:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.886 09:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.145 09:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.145 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.WGq 00:18:53.145 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.WGq 00:18:53.145 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:18:53.145 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.YF4 00:18:53.145 09:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.145 09:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.416 09:59:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.416 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.YF4 00:18:53.416 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.YF4 00:18:53.699 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:53.699 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.699 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:53.699 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.699 09:59:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.958 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:18:53.958 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:53.958 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.958 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:53.958 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.958 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:18:53.958 09:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.958 09:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.958 09:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.958 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:53.958 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:54.215 00:18:54.215 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:54.215 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:54.215 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.473 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.473 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.473 09:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.473 09:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.473 09:59:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.473 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:54.473 { 00:18:54.473 "auth": { 00:18:54.473 "dhgroup": "null", 00:18:54.473 "digest": "sha256", 00:18:54.473 "state": "completed" 00:18:54.473 }, 00:18:54.473 "cntlid": 1, 00:18:54.473 "listen_address": { 00:18:54.473 "adrfam": "IPv4", 00:18:54.473 "traddr": "10.0.0.2", 00:18:54.473 "trsvcid": "4420", 00:18:54.473 "trtype": "TCP" 00:18:54.473 }, 00:18:54.473 "peer_address": { 00:18:54.473 "adrfam": "IPv4", 00:18:54.473 "traddr": "10.0.0.1", 00:18:54.473 "trsvcid": "41444", 00:18:54.473 "trtype": "TCP" 00:18:54.473 }, 00:18:54.473 "qid": 0, 00:18:54.473 "state": "enabled" 00:18:54.473 } 00:18:54.473 ]' 00:18:54.473 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:54.732 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.732 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:54.732 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:54.732 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:54.732 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.732 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.732 09:59:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.990 09:59:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:19:00.326 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.326 09:59:36 nvmf_tcp.nvmf_auth_target 
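Each connect_authenticate round in the trace follows the same pattern: allow the host to use the key on the target subsystem, attach a controller from the host side with that DHCHAP key, confirm from the target's qpair listing that the negotiated digest and DH group match and the auth state is "completed", tear the controller down, and then repeat the handshake with the kernel initiator via nvme connect. Condensed into a sketch (the rpc.py path, NQNs, addresses, and jq checks are the ones appearing in the trace; digest, dhgroup, and keyid vary per iteration):

    # One authentication round, distilled from the trace above.
    digest=sha256 dhgroup=null keyid=0
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4

    # Target side: permit this host to authenticate with key<keyid> on the subsystem.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"

    # Host side (SPDK bdev path): attach with DH-HMAC-CHAP, then verify the negotiated parameters.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Kernel initiator path: same secret, read from the key file generated earlier.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 8b97099d-9860-4879-a034-2bfa904443b4 \
        --dhchap-secret "$(cat /tmp/spdk.key-null.66H)"
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The same sequence then repeats for key1 through key3, and later for the ffdhe DH groups.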
-- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:00.326 09:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.326 09:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.326 09:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:00.327 09:59:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:00.327 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:00.327 { 00:19:00.327 "auth": { 00:19:00.327 "dhgroup": 
"null", 00:19:00.327 "digest": "sha256", 00:19:00.327 "state": "completed" 00:19:00.327 }, 00:19:00.327 "cntlid": 3, 00:19:00.327 "listen_address": { 00:19:00.327 "adrfam": "IPv4", 00:19:00.327 "traddr": "10.0.0.2", 00:19:00.327 "trsvcid": "4420", 00:19:00.327 "trtype": "TCP" 00:19:00.327 }, 00:19:00.327 "peer_address": { 00:19:00.327 "adrfam": "IPv4", 00:19:00.327 "traddr": "10.0.0.1", 00:19:00.327 "trsvcid": "41460", 00:19:00.327 "trtype": "TCP" 00:19:00.327 }, 00:19:00.327 "qid": 0, 00:19:00.327 "state": "enabled" 00:19:00.327 } 00:19:00.327 ]' 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:00.327 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:00.585 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.585 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.585 09:59:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.843 09:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:19:01.776 09:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.776 09:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:01.776 09:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.776 09:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.776 09:59:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.776 09:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:01.776 09:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.776 09:59:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:02.034 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:19:02.034 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:02.034 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.034 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:02.034 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:02.034 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:19:02.034 09:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.034 09:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.034 09:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.034 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:02.034 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:02.290 00:19:02.290 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:02.290 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:02.290 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.854 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.854 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.854 09:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.854 09:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.854 09:59:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.854 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:02.854 { 00:19:02.854 "auth": { 00:19:02.854 "dhgroup": "null", 00:19:02.854 "digest": "sha256", 00:19:02.854 "state": "completed" 00:19:02.854 }, 00:19:02.854 "cntlid": 5, 00:19:02.854 "listen_address": { 00:19:02.854 "adrfam": "IPv4", 00:19:02.854 "traddr": "10.0.0.2", 00:19:02.854 "trsvcid": "4420", 00:19:02.854 "trtype": "TCP" 00:19:02.854 }, 00:19:02.854 "peer_address": { 00:19:02.854 "adrfam": "IPv4", 00:19:02.854 "traddr": "10.0.0.1", 00:19:02.854 "trsvcid": "51132", 00:19:02.854 "trtype": "TCP" 00:19:02.854 }, 00:19:02.854 "qid": 0, 00:19:02.854 "state": "enabled" 00:19:02.854 } 00:19:02.854 ]' 00:19:02.854 09:59:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:02.854 09:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.854 09:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:02.854 09:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:02.854 09:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:02.854 09:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.854 09:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.854 09:59:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.419 09:59:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:19:03.983 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.983 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:03.983 09:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.983 09:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.240 09:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.240 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:04.240 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.240 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.498 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:19:04.498 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:04.498 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.498 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:04.498 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.498 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:19:04.498 09:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.498 09:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.498 09:59:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.498 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.498 09:59:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.771 00:19:04.771 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:04.771 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:04.771 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.336 09:59:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:05.336 { 00:19:05.336 "auth": { 00:19:05.336 "dhgroup": "null", 00:19:05.336 "digest": "sha256", 00:19:05.336 "state": "completed" 00:19:05.336 }, 00:19:05.336 "cntlid": 7, 00:19:05.336 "listen_address": { 00:19:05.336 "adrfam": "IPv4", 00:19:05.336 "traddr": "10.0.0.2", 00:19:05.336 "trsvcid": "4420", 00:19:05.336 "trtype": "TCP" 00:19:05.336 }, 00:19:05.336 "peer_address": { 00:19:05.336 "adrfam": "IPv4", 00:19:05.336 "traddr": "10.0.0.1", 00:19:05.336 "trsvcid": "51160", 00:19:05.336 "trtype": "TCP" 00:19:05.336 }, 00:19:05.336 "qid": 0, 00:19:05.336 "state": "enabled" 00:19:05.336 } 00:19:05.336 ]' 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.336 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.594 09:59:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:19:06.529 09:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.529 09:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:06.529 09:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.529 09:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.529 09:59:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.529 09:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:06.529 09:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:06.529 09:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.529 09:59:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.786 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:19:06.786 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:06.786 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.786 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.786 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.786 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:19:06.786 09:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.786 09:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.786 09:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.786 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:06.786 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:07.044 00:19:07.044 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:07.044 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:07.044 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:07.614 { 00:19:07.614 "auth": { 00:19:07.614 "dhgroup": "ffdhe2048", 00:19:07.614 "digest": "sha256", 00:19:07.614 "state": "completed" 00:19:07.614 }, 00:19:07.614 "cntlid": 9, 00:19:07.614 "listen_address": { 00:19:07.614 "adrfam": "IPv4", 00:19:07.614 "traddr": "10.0.0.2", 00:19:07.614 "trsvcid": "4420", 00:19:07.614 "trtype": "TCP" 00:19:07.614 }, 00:19:07.614 "peer_address": { 00:19:07.614 "adrfam": "IPv4", 00:19:07.614 "traddr": "10.0.0.1", 00:19:07.614 "trsvcid": "51176", 00:19:07.614 "trtype": "TCP" 00:19:07.614 }, 00:19:07.614 "qid": 0, 00:19:07.614 "state": "enabled" 00:19:07.614 } 00:19:07.614 ]' 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.614 09:59:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.614 09:59:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.872 09:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:19:08.805 09:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.805 09:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:08.805 09:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.805 09:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.805 09:59:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.805 09:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:08.805 09:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.805 09:59:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.805 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:19:08.805 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:08.805 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.805 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:08.805 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.805 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:19:08.805 09:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.805 09:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.805 09:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.805 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:08.805 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:09.371 00:19:09.371 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:09.371 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.371 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:09.642 { 00:19:09.642 "auth": { 00:19:09.642 "dhgroup": "ffdhe2048", 00:19:09.642 "digest": "sha256", 00:19:09.642 "state": "completed" 00:19:09.642 }, 00:19:09.642 "cntlid": 11, 00:19:09.642 "listen_address": { 00:19:09.642 "adrfam": "IPv4", 00:19:09.642 "traddr": "10.0.0.2", 00:19:09.642 "trsvcid": "4420", 00:19:09.642 "trtype": "TCP" 00:19:09.642 }, 00:19:09.642 "peer_address": { 00:19:09.642 "adrfam": "IPv4", 00:19:09.642 "traddr": "10.0.0.1", 00:19:09.642 "trsvcid": "51216", 00:19:09.642 "trtype": "TCP" 00:19:09.642 }, 00:19:09.642 "qid": 0, 00:19:09.642 "state": "enabled" 00:19:09.642 } 00:19:09.642 ]' 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.642 09:59:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.937 09:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:19:10.869 09:59:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.869 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:10.869 09:59:48 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.869 09:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.869 09:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.869 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:10.869 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.869 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:11.127 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:19:11.127 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:11.127 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:11.127 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:11.127 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:11.127 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:19:11.127 09:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.127 09:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.127 09:59:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.127 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.127 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:11.384 00:19:11.642 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:11.642 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:11.642 09:59:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:11.955 { 00:19:11.955 "auth": { 00:19:11.955 "dhgroup": "ffdhe2048", 00:19:11.955 "digest": "sha256", 00:19:11.955 "state": "completed" 00:19:11.955 }, 00:19:11.955 "cntlid": 13, 00:19:11.955 "listen_address": { 
00:19:11.955 "adrfam": "IPv4", 00:19:11.955 "traddr": "10.0.0.2", 00:19:11.955 "trsvcid": "4420", 00:19:11.955 "trtype": "TCP" 00:19:11.955 }, 00:19:11.955 "peer_address": { 00:19:11.955 "adrfam": "IPv4", 00:19:11.955 "traddr": "10.0.0.1", 00:19:11.955 "trsvcid": "45466", 00:19:11.955 "trtype": "TCP" 00:19:11.955 }, 00:19:11.955 "qid": 0, 00:19:11.955 "state": "enabled" 00:19:11.955 } 00:19:11.955 ]' 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.955 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.520 09:59:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:19:13.088 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.088 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:13.088 09:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.088 09:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.088 09:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.088 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:13.088 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.088 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:13.347 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:19:13.347 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:13.347 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.347 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:13.347 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:13.347 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:19:13.347 
09:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.347 09:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.347 09:59:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:13.347 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.347 09:59:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.915 00:19:13.915 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:13.915 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.915 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:13.915 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.915 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.915 09:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:13.915 09:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.175 09:59:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.175 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:14.175 { 00:19:14.175 "auth": { 00:19:14.175 "dhgroup": "ffdhe2048", 00:19:14.175 "digest": "sha256", 00:19:14.175 "state": "completed" 00:19:14.175 }, 00:19:14.175 "cntlid": 15, 00:19:14.175 "listen_address": { 00:19:14.175 "adrfam": "IPv4", 00:19:14.175 "traddr": "10.0.0.2", 00:19:14.175 "trsvcid": "4420", 00:19:14.175 "trtype": "TCP" 00:19:14.175 }, 00:19:14.175 "peer_address": { 00:19:14.175 "adrfam": "IPv4", 00:19:14.175 "traddr": "10.0.0.1", 00:19:14.175 "trsvcid": "45486", 00:19:14.175 "trtype": "TCP" 00:19:14.175 }, 00:19:14.175 "qid": 0, 00:19:14.175 "state": "enabled" 00:19:14.175 } 00:19:14.175 ]' 00:19:14.175 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:14.175 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:14.175 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:14.175 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.175 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:14.175 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.175 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.175 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.433 09:59:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:19:15.367 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.367 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:15.367 09:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.367 09:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.367 09:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.367 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.367 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:15.367 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.367 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.626 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:19:15.626 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:15.626 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.626 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:15.626 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:15.626 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:19:15.626 09:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.626 09:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.626 09:59:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.626 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:15.626 09:59:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:15.887 00:19:15.887 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:15.887 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:15.887 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.150 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:19:16.150 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.150 09:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.150 09:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.150 09:59:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.150 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:16.150 { 00:19:16.150 "auth": { 00:19:16.150 "dhgroup": "ffdhe3072", 00:19:16.150 "digest": "sha256", 00:19:16.150 "state": "completed" 00:19:16.150 }, 00:19:16.150 "cntlid": 17, 00:19:16.150 "listen_address": { 00:19:16.150 "adrfam": "IPv4", 00:19:16.150 "traddr": "10.0.0.2", 00:19:16.150 "trsvcid": "4420", 00:19:16.150 "trtype": "TCP" 00:19:16.150 }, 00:19:16.150 "peer_address": { 00:19:16.150 "adrfam": "IPv4", 00:19:16.150 "traddr": "10.0.0.1", 00:19:16.150 "trsvcid": "45512", 00:19:16.151 "trtype": "TCP" 00:19:16.151 }, 00:19:16.151 "qid": 0, 00:19:16.151 "state": "enabled" 00:19:16.151 } 00:19:16.151 ]' 00:19:16.151 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:16.151 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:16.151 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:16.151 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.151 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:16.409 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.409 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.409 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.409 09:59:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:19:17.342 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.342 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:17.343 09:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.343 09:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.343 09:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.343 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:17.343 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.343 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.600 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:19:17.600 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:17.600 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.600 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.600 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.600 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:19:17.600 09:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.600 09:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.600 09:59:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.600 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:17.600 09:59:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:17.858 00:19:17.858 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:17.858 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:17.858 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.116 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.116 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.116 09:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.116 09:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.116 09:59:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.116 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:18.116 { 00:19:18.116 "auth": { 00:19:18.116 "dhgroup": "ffdhe3072", 00:19:18.116 "digest": "sha256", 00:19:18.116 "state": "completed" 00:19:18.116 }, 00:19:18.116 "cntlid": 19, 00:19:18.116 "listen_address": { 00:19:18.116 "adrfam": "IPv4", 00:19:18.116 "traddr": "10.0.0.2", 00:19:18.116 "trsvcid": "4420", 00:19:18.116 "trtype": "TCP" 00:19:18.116 }, 00:19:18.116 "peer_address": { 00:19:18.116 "adrfam": "IPv4", 00:19:18.116 "traddr": "10.0.0.1", 00:19:18.116 "trsvcid": "45538", 00:19:18.116 "trtype": "TCP" 00:19:18.116 }, 00:19:18.116 "qid": 0, 00:19:18.116 "state": "enabled" 00:19:18.116 } 00:19:18.116 ]' 00:19:18.116 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:18.116 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.116 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.dhgroup' 00:19:18.421 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.421 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:18.421 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.421 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.421 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.679 09:59:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:19:19.246 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.246 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:19.246 09:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.246 09:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.246 09:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.246 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:19.246 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.246 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.505 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:19:19.505 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:19.505 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.505 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.505 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:19.505 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:19:19.505 09:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.505 09:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.505 09:59:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.505 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:19.505 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:19.763 00:19:19.763 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:19.763 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:19.763 09:59:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:20.329 { 00:19:20.329 "auth": { 00:19:20.329 "dhgroup": "ffdhe3072", 00:19:20.329 "digest": "sha256", 00:19:20.329 "state": "completed" 00:19:20.329 }, 00:19:20.329 "cntlid": 21, 00:19:20.329 "listen_address": { 00:19:20.329 "adrfam": "IPv4", 00:19:20.329 "traddr": "10.0.0.2", 00:19:20.329 "trsvcid": "4420", 00:19:20.329 "trtype": "TCP" 00:19:20.329 }, 00:19:20.329 "peer_address": { 00:19:20.329 "adrfam": "IPv4", 00:19:20.329 "traddr": "10.0.0.1", 00:19:20.329 "trsvcid": "45566", 00:19:20.329 "trtype": "TCP" 00:19:20.329 }, 00:19:20.329 "qid": 0, 00:19:20.329 "state": "enabled" 00:19:20.329 } 00:19:20.329 ]' 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.329 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.588 09:59:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:19:21.524 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.524 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:21.524 09:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 
-- # xtrace_disable 00:19:21.524 09:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.524 09:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.524 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:21.524 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.524 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.783 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:19:21.783 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:21.783 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.783 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:21.783 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.783 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:19:21.783 09:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:21.783 09:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.783 09:59:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:21.783 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.783 09:59:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.042 00:19:22.042 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:22.042 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:22.042 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.301 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.301 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.301 09:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:22.301 09:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.559 09:59:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:22.559 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:22.559 { 00:19:22.559 "auth": { 00:19:22.559 "dhgroup": "ffdhe3072", 00:19:22.559 "digest": "sha256", 00:19:22.559 "state": "completed" 00:19:22.559 }, 00:19:22.559 "cntlid": 23, 00:19:22.559 "listen_address": { 00:19:22.559 "adrfam": "IPv4", 00:19:22.559 "traddr": 
"10.0.0.2", 00:19:22.559 "trsvcid": "4420", 00:19:22.559 "trtype": "TCP" 00:19:22.559 }, 00:19:22.559 "peer_address": { 00:19:22.559 "adrfam": "IPv4", 00:19:22.559 "traddr": "10.0.0.1", 00:19:22.559 "trsvcid": "55446", 00:19:22.559 "trtype": "TCP" 00:19:22.559 }, 00:19:22.559 "qid": 0, 00:19:22.559 "state": "enabled" 00:19:22.559 } 00:19:22.560 ]' 00:19:22.560 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:22.560 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.560 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:22.560 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.560 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:22.560 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.560 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.560 09:59:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.824 10:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:19:23.761 10:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.761 10:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:23.761 10:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.761 10:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.761 10:00:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:23.761 10:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.761 10:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:23.761 10:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:23.761 10:00:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:23.761 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:19:23.761 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:23.761 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.761 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.761 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.761 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:19:23.761 10:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:23.761 10:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.020 10:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.020 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:24.020 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:24.278 00:19:24.278 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:24.278 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:24.278 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.535 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.535 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.535 10:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:24.535 10:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.535 10:00:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:24.535 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:24.535 { 00:19:24.535 "auth": { 00:19:24.535 "dhgroup": "ffdhe4096", 00:19:24.535 "digest": "sha256", 00:19:24.535 "state": "completed" 00:19:24.535 }, 00:19:24.535 "cntlid": 25, 00:19:24.535 "listen_address": { 00:19:24.535 "adrfam": "IPv4", 00:19:24.535 "traddr": "10.0.0.2", 00:19:24.535 "trsvcid": "4420", 00:19:24.535 "trtype": "TCP" 00:19:24.535 }, 00:19:24.535 "peer_address": { 00:19:24.535 "adrfam": "IPv4", 00:19:24.535 "traddr": "10.0.0.1", 00:19:24.535 "trsvcid": "55484", 00:19:24.535 "trtype": "TCP" 00:19:24.535 }, 00:19:24.535 "qid": 0, 00:19:24.535 "state": "enabled" 00:19:24.535 } 00:19:24.535 ]' 00:19:24.535 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:24.535 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.535 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:24.793 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.793 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:24.793 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.793 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.793 10:00:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.049 10:00:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:25.975 10:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.232 10:00:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.232 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:26.232 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:26.490 00:19:26.490 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:26.490 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:26.490 10:00:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.748 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:26.748 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.748 10:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:26.748 10:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.748 10:00:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:26.748 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:26.748 { 00:19:26.748 "auth": { 00:19:26.748 "dhgroup": "ffdhe4096", 00:19:26.748 "digest": "sha256", 00:19:26.748 "state": "completed" 00:19:26.748 }, 00:19:26.748 "cntlid": 27, 00:19:26.748 "listen_address": { 00:19:26.748 "adrfam": "IPv4", 00:19:26.748 "traddr": "10.0.0.2", 00:19:26.748 "trsvcid": "4420", 00:19:26.748 "trtype": "TCP" 00:19:26.748 }, 00:19:26.748 "peer_address": { 00:19:26.748 "adrfam": "IPv4", 00:19:26.748 "traddr": "10.0.0.1", 00:19:26.748 "trsvcid": "55512", 00:19:26.748 "trtype": "TCP" 00:19:26.748 }, 00:19:26.748 "qid": 0, 00:19:26.748 "state": "enabled" 00:19:26.748 } 00:19:26.748 ]' 00:19:26.748 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:26.748 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.748 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:26.748 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.006 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:27.006 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.006 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.007 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.265 10:00:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:19:27.832 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.832 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:27.832 10:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:27.832 10:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.833 10:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:27.833 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:27.833 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.833 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:28.094 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:19:28.094 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:28.094 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.094 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.094 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.094 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:19:28.094 10:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.094 10:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.094 10:00:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.094 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:28.094 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:28.666 00:19:28.667 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:28.667 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.667 10:00:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:28.924 10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.924 10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.924 10:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:28.924 10:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.924 10:00:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:28.924 10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:28.924 { 00:19:28.924 "auth": { 00:19:28.924 "dhgroup": "ffdhe4096", 00:19:28.924 "digest": "sha256", 00:19:28.924 "state": "completed" 00:19:28.924 }, 00:19:28.924 "cntlid": 29, 00:19:28.924 "listen_address": { 00:19:28.924 "adrfam": "IPv4", 00:19:28.924 "traddr": "10.0.0.2", 00:19:28.924 "trsvcid": "4420", 00:19:28.924 "trtype": "TCP" 00:19:28.925 }, 00:19:28.925 "peer_address": { 00:19:28.925 "adrfam": "IPv4", 00:19:28.925 "traddr": "10.0.0.1", 00:19:28.925 "trsvcid": "55544", 00:19:28.925 "trtype": "TCP" 00:19:28.925 }, 00:19:28.925 "qid": 0, 00:19:28.925 "state": "enabled" 00:19:28.925 } 00:19:28.925 ]' 00:19:28.925 10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:28.925 10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.925 10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:28.925 
10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.925 10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:28.925 10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.925 10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.925 10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.490 10:00:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:19:30.055 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.055 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:30.055 10:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.055 10:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.055 10:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.055 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:30.055 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.055 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.314 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:19:30.314 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:30.314 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.314 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.314 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.314 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:19:30.314 10:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:30.314 10:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.314 10:00:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:30.314 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.314 10:00:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.878 00:19:30.878 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:30.878 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:30.878 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.137 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.137 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.137 10:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:31.137 10:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.137 10:00:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:31.137 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:31.137 { 00:19:31.137 "auth": { 00:19:31.137 "dhgroup": "ffdhe4096", 00:19:31.137 "digest": "sha256", 00:19:31.137 "state": "completed" 00:19:31.137 }, 00:19:31.137 "cntlid": 31, 00:19:31.137 "listen_address": { 00:19:31.137 "adrfam": "IPv4", 00:19:31.137 "traddr": "10.0.0.2", 00:19:31.137 "trsvcid": "4420", 00:19:31.137 "trtype": "TCP" 00:19:31.137 }, 00:19:31.137 "peer_address": { 00:19:31.137 "adrfam": "IPv4", 00:19:31.137 "traddr": "10.0.0.1", 00:19:31.137 "trsvcid": "38852", 00:19:31.137 "trtype": "TCP" 00:19:31.137 }, 00:19:31.137 "qid": 0, 00:19:31.137 "state": "enabled" 00:19:31.137 } 00:19:31.137 ]' 00:19:31.137 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:31.137 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.137 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:31.137 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.137 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:31.396 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.396 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.396 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.655 10:00:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:19:32.223 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.223 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:32.223 10:00:09 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.223 10:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.223 10:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.223 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.223 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:32.223 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:32.223 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:32.788 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:19:32.788 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:32.789 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.789 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:32.789 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.789 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:19:32.789 10:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:32.789 10:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.789 10:00:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:32.789 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:32.789 10:00:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:33.047 00:19:33.047 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:33.047 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:33.047 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.305 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.305 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.305 10:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:33.305 10:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.562 10:00:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:33.562 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:33.562 { 00:19:33.562 "auth": { 00:19:33.562 "dhgroup": "ffdhe6144", 00:19:33.562 "digest": "sha256", 00:19:33.562 "state": "completed" 
00:19:33.562 }, 00:19:33.562 "cntlid": 33, 00:19:33.562 "listen_address": { 00:19:33.562 "adrfam": "IPv4", 00:19:33.562 "traddr": "10.0.0.2", 00:19:33.562 "trsvcid": "4420", 00:19:33.562 "trtype": "TCP" 00:19:33.562 }, 00:19:33.562 "peer_address": { 00:19:33.562 "adrfam": "IPv4", 00:19:33.562 "traddr": "10.0.0.1", 00:19:33.562 "trsvcid": "38884", 00:19:33.562 "trtype": "TCP" 00:19:33.562 }, 00:19:33.562 "qid": 0, 00:19:33.562 "state": "enabled" 00:19:33.562 } 00:19:33.562 ]' 00:19:33.562 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:33.562 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.562 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:33.562 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.562 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:33.562 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.562 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.562 10:00:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.819 10:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:19:34.753 10:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.753 10:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:34.753 10:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:34.753 10:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.753 10:00:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:34.753 10:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:34.753 10:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.753 10:00:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.011 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:19:35.011 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:35.011 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.011 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.011 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.011 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:19:35.011 10:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.011 10:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.011 10:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.011 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:35.011 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:35.576 00:19:35.576 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:35.576 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.576 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:35.835 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.835 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.835 10:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:35.835 10:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.835 10:00:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:35.835 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:35.835 { 00:19:35.835 "auth": { 00:19:35.835 "dhgroup": "ffdhe6144", 00:19:35.835 "digest": "sha256", 00:19:35.835 "state": "completed" 00:19:35.835 }, 00:19:35.835 "cntlid": 35, 00:19:35.835 "listen_address": { 00:19:35.835 "adrfam": "IPv4", 00:19:35.835 "traddr": "10.0.0.2", 00:19:35.835 "trsvcid": "4420", 00:19:35.835 "trtype": "TCP" 00:19:35.835 }, 00:19:35.835 "peer_address": { 00:19:35.835 "adrfam": "IPv4", 00:19:35.835 "traddr": "10.0.0.1", 00:19:35.835 "trsvcid": "38894", 00:19:35.835 "trtype": "TCP" 00:19:35.835 }, 00:19:35.835 "qid": 0, 00:19:35.835 "state": "enabled" 00:19:35.835 } 00:19:35.835 ]' 00:19:35.835 10:00:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:35.835 10:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.835 10:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:35.835 10:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.835 10:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:35.835 10:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.835 10:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.836 10:00:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.096 10:00:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:19:37.041 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.041 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:37.041 10:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.041 10:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.041 10:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.041 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:37.041 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.041 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.304 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:19:37.304 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:37.304 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.304 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:37.304 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:37.304 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:19:37.304 10:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:37.304 10:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.304 10:00:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:37.304 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:37.304 10:00:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:37.870 00:19:37.870 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:37.870 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.870 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
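For readability, the block that keeps repeating above reduces to the following host/target RPC sequence. This is a minimal sketch reconstructed from the xtrace output (it is not the test script itself); the rpc.py path, socket, NQNs and the sha256/ffdhe6144/key2 combination are the ones printed in the log, and the target-side calls are assumed to go to rpc.py's default socket, as the rpc_cmd wrapper does.

  # Minimal sketch of one connect_authenticate iteration (sha256 / ffdhe6144 / key2),
  # reconstructed from the xtrace output above. Paths, NQNs and keys are taken from the
  # log; the target-side rpc.py socket is assumed to be the default one used by rpc_cmd.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # 1) Limit the host-side initiator to one digest/dhgroup combination.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # 2) Allow the host on the target subsystem with a specific DH-HMAC-CHAP key.
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2

  # 3) Attach a controller from the host side; authentication runs during this call.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2

  # 4) Verify what was negotiated, as reported by host and target.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
  "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'     # expect ffdhe6144
  "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'       # expect completed

  # 5) Tear down before the next digest/dhgroup/key combination.
  "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0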
00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:38.128 { 00:19:38.128 "auth": { 00:19:38.128 "dhgroup": "ffdhe6144", 00:19:38.128 "digest": "sha256", 00:19:38.128 "state": "completed" 00:19:38.128 }, 00:19:38.128 "cntlid": 37, 00:19:38.128 "listen_address": { 00:19:38.128 "adrfam": "IPv4", 00:19:38.128 "traddr": "10.0.0.2", 00:19:38.128 "trsvcid": "4420", 00:19:38.128 "trtype": "TCP" 00:19:38.128 }, 00:19:38.128 "peer_address": { 00:19:38.128 "adrfam": "IPv4", 00:19:38.128 "traddr": "10.0.0.1", 00:19:38.128 "trsvcid": "38906", 00:19:38.128 "trtype": "TCP" 00:19:38.128 }, 00:19:38.128 "qid": 0, 00:19:38.128 "state": "enabled" 00:19:38.128 } 00:19:38.128 ]' 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.128 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.695 10:00:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:19:39.261 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.261 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:39.261 10:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.261 10:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.261 10:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.261 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:39.261 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.261 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:19:39.520 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:19:39.520 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:39.520 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.520 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:39.520 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:39.520 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:19:39.520 10:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.520 10:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.520 10:00:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.520 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.520 10:00:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.084 00:19:40.084 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:40.084 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:40.084 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.341 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.341 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.341 10:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.341 10:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.341 10:00:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.341 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:40.341 { 00:19:40.341 "auth": { 00:19:40.341 "dhgroup": "ffdhe6144", 00:19:40.341 "digest": "sha256", 00:19:40.341 "state": "completed" 00:19:40.341 }, 00:19:40.341 "cntlid": 39, 00:19:40.341 "listen_address": { 00:19:40.341 "adrfam": "IPv4", 00:19:40.341 "traddr": "10.0.0.2", 00:19:40.341 "trsvcid": "4420", 00:19:40.341 "trtype": "TCP" 00:19:40.341 }, 00:19:40.341 "peer_address": { 00:19:40.341 "adrfam": "IPv4", 00:19:40.341 "traddr": "10.0.0.1", 00:19:40.341 "trsvcid": "38934", 00:19:40.341 "trtype": "TCP" 00:19:40.341 }, 00:19:40.341 "qid": 0, 00:19:40.341 "state": "enabled" 00:19:40.341 } 00:19:40.341 ]' 00:19:40.341 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:40.341 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.341 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:40.598 
10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.598 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:40.598 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.598 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.598 10:00:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.856 10:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:19:41.420 10:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.420 10:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:41.420 10:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.420 10:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.420 10:00:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.420 10:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.420 10:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:41.420 10:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.420 10:00:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.679 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:19:41.679 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:41.679 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:41.679 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:41.679 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:41.679 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:19:41.679 10:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:41.679 10:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.679 10:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:41.679 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:41.679 10:00:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:42.243 00:19:42.500 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:42.500 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:42.500 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.757 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.757 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.757 10:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:42.757 10:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.757 10:00:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:42.757 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:42.757 { 00:19:42.757 "auth": { 00:19:42.757 "dhgroup": "ffdhe8192", 00:19:42.757 "digest": "sha256", 00:19:42.757 "state": "completed" 00:19:42.757 }, 00:19:42.757 "cntlid": 41, 00:19:42.757 "listen_address": { 00:19:42.757 "adrfam": "IPv4", 00:19:42.757 "traddr": "10.0.0.2", 00:19:42.757 "trsvcid": "4420", 00:19:42.757 "trtype": "TCP" 00:19:42.757 }, 00:19:42.757 "peer_address": { 00:19:42.757 "adrfam": "IPv4", 00:19:42.757 "traddr": "10.0.0.1", 00:19:42.757 "trsvcid": "58706", 00:19:42.757 "trtype": "TCP" 00:19:42.757 }, 00:19:42.757 "qid": 0, 00:19:42.757 "state": "enabled" 00:19:42.757 } 00:19:42.757 ]' 00:19:42.757 10:00:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:42.757 10:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.757 10:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:42.757 10:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.757 10:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:42.757 10:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.757 10:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.757 10:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.325 10:00:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:19:43.891 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.891 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:43.891 10:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:43.891 10:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.891 10:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:43.891 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:43.891 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.892 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.149 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:19:44.149 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:44.149 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.149 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:44.149 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:44.149 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:19:44.149 10:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:44.149 10:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.149 10:00:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:44.149 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:44.149 10:00:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:45.085 00:19:45.085 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:45.085 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.085 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:45.085 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.085 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.085 10:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:45.085 10:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.085 10:00:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:45.085 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:45.085 { 00:19:45.085 "auth": { 00:19:45.085 "dhgroup": "ffdhe8192", 00:19:45.085 "digest": "sha256", 00:19:45.085 "state": 
"completed" 00:19:45.085 }, 00:19:45.085 "cntlid": 43, 00:19:45.085 "listen_address": { 00:19:45.085 "adrfam": "IPv4", 00:19:45.085 "traddr": "10.0.0.2", 00:19:45.085 "trsvcid": "4420", 00:19:45.085 "trtype": "TCP" 00:19:45.085 }, 00:19:45.085 "peer_address": { 00:19:45.085 "adrfam": "IPv4", 00:19:45.085 "traddr": "10.0.0.1", 00:19:45.085 "trsvcid": "58740", 00:19:45.085 "trtype": "TCP" 00:19:45.085 }, 00:19:45.085 "qid": 0, 00:19:45.085 "state": "enabled" 00:19:45.085 } 00:19:45.085 ]' 00:19:45.085 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:45.343 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.343 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:45.343 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.343 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:45.343 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.343 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.343 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.602 10:00:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:19:46.551 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.551 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:46.551 10:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.551 10:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.551 10:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.551 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:46.551 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.551 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.820 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:19:46.820 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:46.820 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.820 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:46.820 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:46.820 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:19:46.820 10:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:46.820 10:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.820 10:00:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:46.820 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.820 10:00:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:47.415 00:19:47.415 10:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:47.415 10:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.415 10:00:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:48.006 10:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.006 10:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.006 10:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:48.006 10:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.006 10:00:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:48.006 10:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:48.006 { 00:19:48.006 "auth": { 00:19:48.006 "dhgroup": "ffdhe8192", 00:19:48.006 "digest": "sha256", 00:19:48.006 "state": "completed" 00:19:48.006 }, 00:19:48.006 "cntlid": 45, 00:19:48.006 "listen_address": { 00:19:48.006 "adrfam": "IPv4", 00:19:48.006 "traddr": "10.0.0.2", 00:19:48.006 "trsvcid": "4420", 00:19:48.006 "trtype": "TCP" 00:19:48.006 }, 00:19:48.006 "peer_address": { 00:19:48.006 "adrfam": "IPv4", 00:19:48.006 "traddr": "10.0.0.1", 00:19:48.006 "trsvcid": "58776", 00:19:48.006 "trtype": "TCP" 00:19:48.006 }, 00:19:48.006 "qid": 0, 00:19:48.006 "state": "enabled" 00:19:48.006 } 00:19:48.007 ]' 00:19:48.007 10:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:48.007 10:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.007 10:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:48.007 10:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.007 10:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:48.007 10:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.007 10:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.007 10:00:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.263 10:00:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:19:49.199 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.199 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:49.199 10:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.199 10:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.199 10:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.199 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:49.199 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.199 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.457 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:19:49.457 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:49.457 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.457 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:49.457 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:49.457 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:19:49.457 10:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:49.458 10:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.458 10:00:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:49.458 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.458 10:00:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.023 00:19:50.023 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:50.023 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.023 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:50.289 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:50.289 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.289 10:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:50.289 10:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.289 10:00:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:50.289 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:50.289 { 00:19:50.289 "auth": { 00:19:50.290 "dhgroup": "ffdhe8192", 00:19:50.290 "digest": "sha256", 00:19:50.290 "state": "completed" 00:19:50.290 }, 00:19:50.290 "cntlid": 47, 00:19:50.290 "listen_address": { 00:19:50.290 "adrfam": "IPv4", 00:19:50.290 "traddr": "10.0.0.2", 00:19:50.290 "trsvcid": "4420", 00:19:50.290 "trtype": "TCP" 00:19:50.290 }, 00:19:50.290 "peer_address": { 00:19:50.290 "adrfam": "IPv4", 00:19:50.290 "traddr": "10.0.0.1", 00:19:50.290 "trsvcid": "58798", 00:19:50.290 "trtype": "TCP" 00:19:50.290 }, 00:19:50.290 "qid": 0, 00:19:50.290 "state": "enabled" 00:19:50.290 } 00:19:50.290 ]' 00:19:50.290 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:50.548 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:50.548 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:50.548 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.548 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:50.548 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.549 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.549 10:00:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.813 10:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:19:51.749 10:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.749 10:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:51.749 10:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.749 10:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.749 10:00:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.749 10:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:19:51.749 10:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.749 10:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:51.749 10:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups null 00:19:51.749 10:00:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:52.007 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:19:52.007 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:52.007 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:52.007 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:52.007 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:52.007 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:19:52.007 10:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.007 10:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.007 10:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.007 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.007 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.264 00:19:52.264 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:52.264 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:52.264 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.523 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.523 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.523 10:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.523 10:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.523 10:00:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.523 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:52.523 { 00:19:52.523 "auth": { 00:19:52.523 "dhgroup": "null", 00:19:52.523 "digest": "sha384", 00:19:52.523 "state": "completed" 00:19:52.523 }, 00:19:52.523 "cntlid": 49, 00:19:52.523 "listen_address": { 00:19:52.523 "adrfam": "IPv4", 00:19:52.523 "traddr": "10.0.0.2", 00:19:52.523 "trsvcid": "4420", 00:19:52.523 "trtype": "TCP" 00:19:52.523 }, 00:19:52.523 "peer_address": { 00:19:52.523 "adrfam": "IPv4", 00:19:52.523 "traddr": "10.0.0.1", 00:19:52.523 "trsvcid": "57348", 00:19:52.523 "trtype": "TCP" 00:19:52.523 }, 00:19:52.523 "qid": 0, 00:19:52.523 "state": "enabled" 00:19:52.523 } 00:19:52.523 ]' 00:19:52.523 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
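As in the earlier iterations, each digest/dhgroup/key combination is also re-verified through the kernel initiator: after detaching the SPDK-side controller, the host reconnects with nvme-cli using the DH-HMAC-CHAP secret matching the key configured on the target, disconnects, and removes the host entry. A minimal sketch of that leg, using the key0 secret printed above (test data only) and again assuming the default target-side rpc.py socket:

  # Kernel-initiator leg of the flow, mirroring the nvme connect / disconnect /
  # nvmf_subsystem_remove_host lines in the log.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  SECRET='DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==:'

  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret "$SECRET"
  nvme disconnect -n "$SUBNQN"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"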
00:19:52.523 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.523 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:52.813 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:52.813 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:52.813 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.813 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.813 10:00:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.092 10:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:19:53.659 10:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.659 10:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:53.659 10:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.659 10:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.659 10:00:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.659 10:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:53.659 10:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:53.659 10:00:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:53.918 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:19:53.918 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:53.918 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.918 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:53.918 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:53.918 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:19:53.918 10:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.918 10:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.918 10:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.918 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 00:19:53.918 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:54.484 00:19:54.484 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:54.484 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.484 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:54.746 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.746 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.746 10:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.746 10:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.746 10:00:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.746 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:54.746 { 00:19:54.746 "auth": { 00:19:54.746 "dhgroup": "null", 00:19:54.746 "digest": "sha384", 00:19:54.746 "state": "completed" 00:19:54.746 }, 00:19:54.746 "cntlid": 51, 00:19:54.746 "listen_address": { 00:19:54.746 "adrfam": "IPv4", 00:19:54.746 "traddr": "10.0.0.2", 00:19:54.746 "trsvcid": "4420", 00:19:54.746 "trtype": "TCP" 00:19:54.746 }, 00:19:54.746 "peer_address": { 00:19:54.746 "adrfam": "IPv4", 00:19:54.746 "traddr": "10.0.0.1", 00:19:54.746 "trsvcid": "57374", 00:19:54.746 "trtype": "TCP" 00:19:54.746 }, 00:19:54.746 "qid": 0, 00:19:54.746 "state": "enabled" 00:19:54.746 } 00:19:54.746 ]' 00:19:54.746 10:00:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:54.746 10:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.746 10:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:54.746 10:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:54.746 10:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:54.746 10:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.746 10:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.746 10:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.005 10:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:19:55.951 10:00:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.951 10:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.209 10:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.209 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:56.209 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:56.467 00:19:56.467 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:56.467 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.467 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:56.725 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.725 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.725 10:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.725 10:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.725 10:00:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.725 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:56.725 { 00:19:56.725 "auth": { 00:19:56.725 "dhgroup": "null", 00:19:56.725 "digest": "sha384", 00:19:56.725 "state": "completed" 00:19:56.725 }, 
00:19:56.725 "cntlid": 53, 00:19:56.725 "listen_address": { 00:19:56.725 "adrfam": "IPv4", 00:19:56.725 "traddr": "10.0.0.2", 00:19:56.725 "trsvcid": "4420", 00:19:56.725 "trtype": "TCP" 00:19:56.725 }, 00:19:56.725 "peer_address": { 00:19:56.725 "adrfam": "IPv4", 00:19:56.725 "traddr": "10.0.0.1", 00:19:56.725 "trsvcid": "57414", 00:19:56.725 "trtype": "TCP" 00:19:56.725 }, 00:19:56.725 "qid": 0, 00:19:56.725 "state": "enabled" 00:19:56.725 } 00:19:56.725 ]' 00:19:56.725 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:56.725 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.725 10:00:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:56.725 10:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:56.725 10:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:56.725 10:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.725 10:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.725 10:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.293 10:00:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:19:57.864 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.865 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:19:57.865 10:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.865 10:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.865 10:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.865 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:57.865 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.865 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:58.129 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:19:58.129 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:58.129 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:58.129 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:58.129 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:58.129 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 
--dhchap-key key3 00:19:58.129 10:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.129 10:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.129 10:00:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.129 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.129 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.387 00:19:58.387 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:58.387 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:58.387 10:00:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.645 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.645 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.645 10:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.645 10:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.904 10:00:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.904 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:58.904 { 00:19:58.904 "auth": { 00:19:58.904 "dhgroup": "null", 00:19:58.904 "digest": "sha384", 00:19:58.904 "state": "completed" 00:19:58.904 }, 00:19:58.904 "cntlid": 55, 00:19:58.904 "listen_address": { 00:19:58.904 "adrfam": "IPv4", 00:19:58.904 "traddr": "10.0.0.2", 00:19:58.904 "trsvcid": "4420", 00:19:58.904 "trtype": "TCP" 00:19:58.904 }, 00:19:58.904 "peer_address": { 00:19:58.904 "adrfam": "IPv4", 00:19:58.904 "traddr": "10.0.0.1", 00:19:58.904 "trsvcid": "57440", 00:19:58.904 "trtype": "TCP" 00:19:58.904 }, 00:19:58.904 "qid": 0, 00:19:58.904 "state": "enabled" 00:19:58.904 } 00:19:58.904 ]' 00:19:58.904 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:58.904 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.904 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:58.904 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:58.904 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:58.904 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.904 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.904 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.175 10:00:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:19:59.741 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.000 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:00.000 10:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.000 10:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.000 10:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.000 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.000 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:00.000 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.000 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.257 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:20:00.257 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:00.257 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:00.257 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:00.257 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:00.257 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:20:00.257 10:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.257 10:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.257 10:00:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.257 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:00.258 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:00.515 00:20:00.515 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:00.515 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:00.515 10:00:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.772 10:00:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.772 10:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.772 10:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.772 10:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.772 10:00:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.772 10:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:00.772 { 00:20:00.772 "auth": { 00:20:00.772 "dhgroup": "ffdhe2048", 00:20:00.772 "digest": "sha384", 00:20:00.772 "state": "completed" 00:20:00.772 }, 00:20:00.772 "cntlid": 57, 00:20:00.772 "listen_address": { 00:20:00.772 "adrfam": "IPv4", 00:20:00.772 "traddr": "10.0.0.2", 00:20:00.772 "trsvcid": "4420", 00:20:00.772 "trtype": "TCP" 00:20:00.772 }, 00:20:00.772 "peer_address": { 00:20:00.772 "adrfam": "IPv4", 00:20:00.772 "traddr": "10.0.0.1", 00:20:00.772 "trsvcid": "57346", 00:20:00.772 "trtype": "TCP" 00:20:00.772 }, 00:20:00.772 "qid": 0, 00:20:00.772 "state": "enabled" 00:20:00.772 } 00:20:00.772 ]' 00:20:00.772 10:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:00.772 10:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.032 10:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:01.032 10:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.032 10:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:01.032 10:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.032 10:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.032 10:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.291 10:00:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:20:02.224 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.224 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:02.224 10:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.224 10:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.224 10:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.224 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:02.224 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.224 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.481 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:20:02.481 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:02.481 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.481 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.481 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:02.481 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:20:02.482 10:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.482 10:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.482 10:00:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.482 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:02.482 10:00:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:02.739 00:20:02.739 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:02.739 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:02.739 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.305 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.305 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.305 10:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.305 10:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.305 10:00:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.305 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:03.305 { 00:20:03.305 "auth": { 00:20:03.305 "dhgroup": "ffdhe2048", 00:20:03.305 "digest": "sha384", 00:20:03.305 "state": "completed" 00:20:03.305 }, 00:20:03.305 "cntlid": 59, 00:20:03.305 "listen_address": { 00:20:03.305 "adrfam": "IPv4", 00:20:03.305 "traddr": "10.0.0.2", 00:20:03.305 "trsvcid": "4420", 00:20:03.305 "trtype": "TCP" 00:20:03.305 }, 00:20:03.305 "peer_address": { 00:20:03.305 "adrfam": "IPv4", 00:20:03.305 "traddr": "10.0.0.1", 00:20:03.305 "trsvcid": "57392", 00:20:03.305 "trtype": "TCP" 00:20:03.305 }, 00:20:03.305 "qid": 0, 00:20:03.305 "state": "enabled" 00:20:03.305 } 00:20:03.305 ]' 00:20:03.305 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:03.306 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.306 10:00:40 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:03.306 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.306 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:03.306 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.306 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.306 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.563 10:00:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:20:04.495 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.495 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.496 10:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.755 10:00:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.755 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:04.755 10:00:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:05.014 00:20:05.014 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:05.014 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.014 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:05.272 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.272 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.272 10:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.272 10:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.272 10:00:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.272 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:05.272 { 00:20:05.272 "auth": { 00:20:05.272 "dhgroup": "ffdhe2048", 00:20:05.272 "digest": "sha384", 00:20:05.272 "state": "completed" 00:20:05.272 }, 00:20:05.272 "cntlid": 61, 00:20:05.272 "listen_address": { 00:20:05.272 "adrfam": "IPv4", 00:20:05.272 "traddr": "10.0.0.2", 00:20:05.272 "trsvcid": "4420", 00:20:05.272 "trtype": "TCP" 00:20:05.272 }, 00:20:05.272 "peer_address": { 00:20:05.272 "adrfam": "IPv4", 00:20:05.272 "traddr": "10.0.0.1", 00:20:05.272 "trsvcid": "57416", 00:20:05.272 "trtype": "TCP" 00:20:05.272 }, 00:20:05.272 "qid": 0, 00:20:05.272 "state": "enabled" 00:20:05.272 } 00:20:05.272 ]' 00:20:05.272 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:05.272 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.272 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:05.531 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.531 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:05.531 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.531 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.531 10:00:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.790 10:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:20:06.724 10:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.724 10:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:06.724 10:00:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.724 10:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.724 10:00:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.724 10:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:06.724 10:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.724 10:00:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.982 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:20:06.982 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:06.982 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.982 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:06.982 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.982 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:20:06.982 10:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.982 10:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.982 10:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.983 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.983 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.241 00:20:07.241 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:07.241 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.241 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:07.501 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.501 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.501 10:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.501 10:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.501 10:00:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.501 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:07.501 { 00:20:07.501 "auth": { 00:20:07.501 "dhgroup": "ffdhe2048", 00:20:07.501 "digest": "sha384", 00:20:07.501 "state": "completed" 00:20:07.501 }, 00:20:07.501 "cntlid": 63, 00:20:07.501 "listen_address": { 00:20:07.501 "adrfam": "IPv4", 
00:20:07.501 "traddr": "10.0.0.2", 00:20:07.501 "trsvcid": "4420", 00:20:07.501 "trtype": "TCP" 00:20:07.501 }, 00:20:07.501 "peer_address": { 00:20:07.501 "adrfam": "IPv4", 00:20:07.501 "traddr": "10.0.0.1", 00:20:07.501 "trsvcid": "57442", 00:20:07.501 "trtype": "TCP" 00:20:07.501 }, 00:20:07.501 "qid": 0, 00:20:07.501 "state": "enabled" 00:20:07.501 } 00:20:07.501 ]' 00:20:07.501 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:07.501 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.501 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:07.758 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.758 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:07.758 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.758 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.758 10:00:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.017 10:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:20:08.951 10:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.951 10:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:08.951 10:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.951 10:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.951 10:00:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.951 10:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.951 10:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:08.951 10:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.951 10:00:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:08.951 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:20:08.951 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:08.951 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.951 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:08.951 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.951 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:20:08.951 10:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.951 10:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.951 10:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.951 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:08.951 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:09.516 00:20:09.516 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:09.516 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:09.516 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.516 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.516 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.516 10:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.516 10:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.516 10:00:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.516 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:09.517 { 00:20:09.517 "auth": { 00:20:09.517 "dhgroup": "ffdhe3072", 00:20:09.517 "digest": "sha384", 00:20:09.517 "state": "completed" 00:20:09.517 }, 00:20:09.517 "cntlid": 65, 00:20:09.517 "listen_address": { 00:20:09.517 "adrfam": "IPv4", 00:20:09.517 "traddr": "10.0.0.2", 00:20:09.517 "trsvcid": "4420", 00:20:09.517 "trtype": "TCP" 00:20:09.517 }, 00:20:09.517 "peer_address": { 00:20:09.517 "adrfam": "IPv4", 00:20:09.517 "traddr": "10.0.0.1", 00:20:09.517 "trsvcid": "57466", 00:20:09.517 "trtype": "TCP" 00:20:09.517 }, 00:20:09.517 "qid": 0, 00:20:09.517 "state": "enabled" 00:20:09.517 } 00:20:09.517 ]' 00:20:09.517 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:09.773 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.773 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:09.773 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.773 10:00:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:09.773 10:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.773 10:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.773 10:00:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.030 10:00:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:20:10.964 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.964 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:10.964 10:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.964 10:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.964 10:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.964 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:10.964 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.964 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.221 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:20:11.222 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:11.222 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.222 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:11.222 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:11.222 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:20:11.222 10:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.222 10:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.222 10:00:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.222 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:11.222 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:11.479 00:20:11.736 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:11.736 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:11.736 10:00:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.993 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:11.993 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.993 10:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.993 10:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.993 10:00:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.993 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:11.993 { 00:20:11.993 "auth": { 00:20:11.993 "dhgroup": "ffdhe3072", 00:20:11.993 "digest": "sha384", 00:20:11.993 "state": "completed" 00:20:11.993 }, 00:20:11.993 "cntlid": 67, 00:20:11.993 "listen_address": { 00:20:11.993 "adrfam": "IPv4", 00:20:11.993 "traddr": "10.0.0.2", 00:20:11.993 "trsvcid": "4420", 00:20:11.993 "trtype": "TCP" 00:20:11.993 }, 00:20:11.993 "peer_address": { 00:20:11.993 "adrfam": "IPv4", 00:20:11.993 "traddr": "10.0.0.1", 00:20:11.993 "trsvcid": "36666", 00:20:11.993 "trtype": "TCP" 00:20:11.993 }, 00:20:11.993 "qid": 0, 00:20:11.993 "state": "enabled" 00:20:11.993 } 00:20:11.993 ]' 00:20:11.993 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:11.993 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.993 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:11.993 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:11.993 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:12.250 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.250 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.250 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.529 10:00:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:20:13.093 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.093 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:13.093 10:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.093 10:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.093 10:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.093 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:13.093 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.093 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe3072 00:20:13.352 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:20:13.352 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:13.352 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.352 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:13.352 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:13.352 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:20:13.352 10:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.352 10:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.352 10:00:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.352 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:13.352 10:00:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:13.918 00:20:13.918 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:13.918 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:13.918 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.918 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.918 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.918 10:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.918 10:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.918 10:00:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.918 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:13.918 { 00:20:13.918 "auth": { 00:20:13.918 "dhgroup": "ffdhe3072", 00:20:13.918 "digest": "sha384", 00:20:13.918 "state": "completed" 00:20:13.918 }, 00:20:13.918 "cntlid": 69, 00:20:13.918 "listen_address": { 00:20:13.918 "adrfam": "IPv4", 00:20:13.918 "traddr": "10.0.0.2", 00:20:13.918 "trsvcid": "4420", 00:20:13.918 "trtype": "TCP" 00:20:13.918 }, 00:20:13.918 "peer_address": { 00:20:13.918 "adrfam": "IPv4", 00:20:13.918 "traddr": "10.0.0.1", 00:20:13.918 "trsvcid": "36680", 00:20:13.918 "trtype": "TCP" 00:20:13.918 }, 00:20:13.918 "qid": 0, 00:20:13.918 "state": "enabled" 00:20:13.918 } 00:20:13.918 ]' 00:20:13.918 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:14.175 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.175 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:14.175 
10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.175 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:14.175 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.175 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.175 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.432 10:00:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:20:15.366 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.366 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:15.366 10:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.366 10:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.366 10:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.366 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:15.366 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.366 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.624 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:20:15.624 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:15.624 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.624 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:15.624 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.624 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:20:15.624 10:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:15.624 10:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.624 10:00:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:15.624 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.624 10:00:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.880 00:20:15.880 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:15.880 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:15.880 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.137 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.137 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.137 10:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.137 10:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.137 10:00:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.137 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:16.137 { 00:20:16.137 "auth": { 00:20:16.137 "dhgroup": "ffdhe3072", 00:20:16.137 "digest": "sha384", 00:20:16.137 "state": "completed" 00:20:16.137 }, 00:20:16.137 "cntlid": 71, 00:20:16.137 "listen_address": { 00:20:16.137 "adrfam": "IPv4", 00:20:16.137 "traddr": "10.0.0.2", 00:20:16.137 "trsvcid": "4420", 00:20:16.137 "trtype": "TCP" 00:20:16.137 }, 00:20:16.137 "peer_address": { 00:20:16.137 "adrfam": "IPv4", 00:20:16.137 "traddr": "10.0.0.1", 00:20:16.137 "trsvcid": "36688", 00:20:16.137 "trtype": "TCP" 00:20:16.137 }, 00:20:16.137 "qid": 0, 00:20:16.137 "state": "enabled" 00:20:16.137 } 00:20:16.137 ]' 00:20:16.137 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:16.137 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.137 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:16.137 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.137 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:16.395 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.395 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.395 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.653 10:00:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:20:17.224 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.224 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:17.224 10:00:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.224 10:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.224 10:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.224 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.224 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:17.225 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:17.225 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:17.483 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:20:17.483 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:17.483 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.483 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:17.483 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:17.483 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:20:17.483 10:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.483 10:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.483 10:00:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.483 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:17.483 10:00:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:18.050 00:20:18.050 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:18.050 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:18.050 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:18.309 { 00:20:18.309 "auth": { 00:20:18.309 "dhgroup": "ffdhe4096", 00:20:18.309 "digest": "sha384", 00:20:18.309 "state": "completed" 
00:20:18.309 }, 00:20:18.309 "cntlid": 73, 00:20:18.309 "listen_address": { 00:20:18.309 "adrfam": "IPv4", 00:20:18.309 "traddr": "10.0.0.2", 00:20:18.309 "trsvcid": "4420", 00:20:18.309 "trtype": "TCP" 00:20:18.309 }, 00:20:18.309 "peer_address": { 00:20:18.309 "adrfam": "IPv4", 00:20:18.309 "traddr": "10.0.0.1", 00:20:18.309 "trsvcid": "36716", 00:20:18.309 "trtype": "TCP" 00:20:18.309 }, 00:20:18.309 "qid": 0, 00:20:18.309 "state": "enabled" 00:20:18.309 } 00:20:18.309 ]' 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.309 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.875 10:00:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:20:19.443 10:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.443 10:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:19.443 10:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.443 10:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.443 10:00:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.443 10:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:19.443 10:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.443 10:00:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.700 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:20:19.700 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:19.700 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.700 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:19.700 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:19.700 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:20:19.700 10:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.700 10:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.700 10:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.700 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:19.700 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:20.272 00:20:20.272 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:20.273 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.273 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:20.538 { 00:20:20.538 "auth": { 00:20:20.538 "dhgroup": "ffdhe4096", 00:20:20.538 "digest": "sha384", 00:20:20.538 "state": "completed" 00:20:20.538 }, 00:20:20.538 "cntlid": 75, 00:20:20.538 "listen_address": { 00:20:20.538 "adrfam": "IPv4", 00:20:20.538 "traddr": "10.0.0.2", 00:20:20.538 "trsvcid": "4420", 00:20:20.538 "trtype": "TCP" 00:20:20.538 }, 00:20:20.538 "peer_address": { 00:20:20.538 "adrfam": "IPv4", 00:20:20.538 "traddr": "10.0.0.1", 00:20:20.538 "trsvcid": "36746", 00:20:20.538 "trtype": "TCP" 00:20:20.538 }, 00:20:20.538 "qid": 0, 00:20:20.538 "state": "enabled" 00:20:20.538 } 00:20:20.538 ]' 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.538 10:00:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.796 10:00:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:20:21.771 10:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.771 10:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:21.771 10:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.771 10:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.771 10:00:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.771 10:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:21.771 10:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.771 10:00:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:21.771 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:20:21.771 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:21.771 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.771 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:21.771 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:21.771 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:20:21.771 10:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.771 10:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.771 10:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.771 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:21.771 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:22.338 00:20:22.338 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:22.338 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:22.338 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.596 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
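Each round of the loop recorded above drives the same DH-HMAC-CHAP handshake from both ends of the connection. The sketch below condenses one such round using only commands that already appear in this log: the rpc.py path, the host RPC socket, the subsystem and host NQNs, and the key1 secret are copied verbatim, while rpc_cmd is the harness helper that talks to the target's default RPC socket (host-side calls go through -s /var/tmp/host.sock). The combined jq probe and the inline "expected" comment are a shorthand for the three separate checks the script performs; treat this as an illustrative outline of one round, not a verbatim excerpt of target/auth.sh.

# 1. Host side: restrict the initiator to the digest/dhgroup pair under test.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# 2. Target side: allow this host NQN to authenticate with the key under test (key1 here).
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1

# 3. Host side: attach a controller with the same key, then verify the negotiated auth parameters.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
  | jq -r '.[0].auth | .digest, .dhgroup, .state'    # expected: sha384, ffdhe4096, completed

# 4. Tear down the bdev controller and repeat the authentication with the kernel initiator,
#    passing the host-side copy of the same key as a DHHC-1 secret, then clean up.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 \
  --hostid 8b97099d-9860-4879-a034-2bfa904443b4 \
  --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4

The digest, dhgroup and key index are the loop variables: the log continues through ffdhe6144 and ffdhe8192 with keys 0-3 under sha384, then starts over with sha512 and the null dhgroup.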
00:20:22.596 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.596 10:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.596 10:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.596 10:00:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.596 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:22.596 { 00:20:22.596 "auth": { 00:20:22.596 "dhgroup": "ffdhe4096", 00:20:22.596 "digest": "sha384", 00:20:22.596 "state": "completed" 00:20:22.596 }, 00:20:22.596 "cntlid": 77, 00:20:22.596 "listen_address": { 00:20:22.596 "adrfam": "IPv4", 00:20:22.596 "traddr": "10.0.0.2", 00:20:22.596 "trsvcid": "4420", 00:20:22.596 "trtype": "TCP" 00:20:22.596 }, 00:20:22.596 "peer_address": { 00:20:22.596 "adrfam": "IPv4", 00:20:22.596 "traddr": "10.0.0.1", 00:20:22.596 "trsvcid": "41678", 00:20:22.596 "trtype": "TCP" 00:20:22.596 }, 00:20:22.596 "qid": 0, 00:20:22.596 "state": "enabled" 00:20:22.597 } 00:20:22.597 ]' 00:20:22.597 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:22.597 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.597 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:22.854 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.854 10:00:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:22.854 10:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.854 10:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.854 10:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.112 10:01:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:20:24.055 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.055 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:24.055 10:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.055 10:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.055 10:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.055 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:24.055 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.055 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:20:24.055 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:20:24.055 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:24.056 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.056 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:24.056 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:24.056 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:20:24.056 10:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.056 10:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.331 10:01:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.331 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.331 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.594 00:20:24.594 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:24.594 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:24.594 10:01:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.852 10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.852 10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.852 10:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.852 10:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.852 10:01:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.852 10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:24.852 { 00:20:24.852 "auth": { 00:20:24.852 "dhgroup": "ffdhe4096", 00:20:24.852 "digest": "sha384", 00:20:24.852 "state": "completed" 00:20:24.852 }, 00:20:24.852 "cntlid": 79, 00:20:24.852 "listen_address": { 00:20:24.852 "adrfam": "IPv4", 00:20:24.852 "traddr": "10.0.0.2", 00:20:24.852 "trsvcid": "4420", 00:20:24.852 "trtype": "TCP" 00:20:24.852 }, 00:20:24.852 "peer_address": { 00:20:24.852 "adrfam": "IPv4", 00:20:24.852 "traddr": "10.0.0.1", 00:20:24.852 "trsvcid": "41714", 00:20:24.852 "trtype": "TCP" 00:20:24.852 }, 00:20:24.852 "qid": 0, 00:20:24.852 "state": "enabled" 00:20:24.852 } 00:20:24.852 ]' 00:20:24.852 10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:25.110 10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.110 10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:25.110 
10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.110 10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:25.110 10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.110 10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.110 10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.369 10:01:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:20:26.307 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.307 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:26.307 10:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.307 10:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.307 10:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.307 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.307 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:26.307 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.307 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:26.565 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:20:26.565 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:26.565 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.565 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:26.565 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:26.565 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:20:26.565 10:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.565 10:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.565 10:01:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.565 10:01:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:26.565 10:01:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:27.131 00:20:27.131 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:27.131 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:27.131 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.389 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.389 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.389 10:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.389 10:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.389 10:01:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.389 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:27.389 { 00:20:27.389 "auth": { 00:20:27.389 "dhgroup": "ffdhe6144", 00:20:27.389 "digest": "sha384", 00:20:27.389 "state": "completed" 00:20:27.389 }, 00:20:27.389 "cntlid": 81, 00:20:27.389 "listen_address": { 00:20:27.389 "adrfam": "IPv4", 00:20:27.389 "traddr": "10.0.0.2", 00:20:27.389 "trsvcid": "4420", 00:20:27.389 "trtype": "TCP" 00:20:27.389 }, 00:20:27.389 "peer_address": { 00:20:27.389 "adrfam": "IPv4", 00:20:27.389 "traddr": "10.0.0.1", 00:20:27.389 "trsvcid": "41748", 00:20:27.389 "trtype": "TCP" 00:20:27.389 }, 00:20:27.389 "qid": 0, 00:20:27.389 "state": "enabled" 00:20:27.389 } 00:20:27.389 ]' 00:20:27.389 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:27.389 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.389 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:27.389 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.389 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:27.390 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.390 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.390 10:01:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.955 10:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:20:28.537 10:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.537 10:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:28.537 10:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.537 10:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.537 10:01:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.537 10:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:28.537 10:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.537 10:01:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.809 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:20:28.809 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:28.809 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.809 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:28.809 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:28.809 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:20:28.809 10:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.809 10:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.809 10:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.809 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:28.809 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:29.375 00:20:29.375 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:29.375 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.375 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:29.634 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.634 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.634 10:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.634 10:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.634 10:01:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.634 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:29.634 { 00:20:29.634 "auth": { 00:20:29.634 "dhgroup": "ffdhe6144", 00:20:29.634 "digest": "sha384", 00:20:29.634 "state": 
"completed" 00:20:29.634 }, 00:20:29.634 "cntlid": 83, 00:20:29.634 "listen_address": { 00:20:29.634 "adrfam": "IPv4", 00:20:29.634 "traddr": "10.0.0.2", 00:20:29.634 "trsvcid": "4420", 00:20:29.634 "trtype": "TCP" 00:20:29.634 }, 00:20:29.634 "peer_address": { 00:20:29.634 "adrfam": "IPv4", 00:20:29.634 "traddr": "10.0.0.1", 00:20:29.634 "trsvcid": "41780", 00:20:29.634 "trtype": "TCP" 00:20:29.634 }, 00:20:29.634 "qid": 0, 00:20:29.634 "state": "enabled" 00:20:29.634 } 00:20:29.634 ]' 00:20:29.634 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:29.634 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:29.634 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:29.634 10:01:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.634 10:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:29.895 10:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.895 10:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.895 10:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.159 10:01:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.129 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:31.130 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:31.736 00:20:31.736 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:31.736 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.736 10:01:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:32.044 { 00:20:32.044 "auth": { 00:20:32.044 "dhgroup": "ffdhe6144", 00:20:32.044 "digest": "sha384", 00:20:32.044 "state": "completed" 00:20:32.044 }, 00:20:32.044 "cntlid": 85, 00:20:32.044 "listen_address": { 00:20:32.044 "adrfam": "IPv4", 00:20:32.044 "traddr": "10.0.0.2", 00:20:32.044 "trsvcid": "4420", 00:20:32.044 "trtype": "TCP" 00:20:32.044 }, 00:20:32.044 "peer_address": { 00:20:32.044 "adrfam": "IPv4", 00:20:32.044 "traddr": "10.0.0.1", 00:20:32.044 "trsvcid": "59092", 00:20:32.044 "trtype": "TCP" 00:20:32.044 }, 00:20:32.044 "qid": 0, 00:20:32.044 "state": "enabled" 00:20:32.044 } 00:20:32.044 ]' 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.044 10:01:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.612 10:01:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:20:33.185 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.185 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:33.185 10:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.185 10:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.443 10:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.443 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:33.443 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.443 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.702 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:20:33.702 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:33.702 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:33.702 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:33.702 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:33.702 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:20:33.702 10:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:33.702 10:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.702 10:01:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:33.702 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.702 10:01:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.267 00:20:34.267 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:34.267 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:34.267 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:34.526 { 00:20:34.526 "auth": { 00:20:34.526 "dhgroup": "ffdhe6144", 00:20:34.526 "digest": "sha384", 00:20:34.526 "state": "completed" 00:20:34.526 }, 00:20:34.526 "cntlid": 87, 00:20:34.526 "listen_address": { 00:20:34.526 "adrfam": "IPv4", 00:20:34.526 "traddr": "10.0.0.2", 00:20:34.526 "trsvcid": "4420", 00:20:34.526 "trtype": "TCP" 00:20:34.526 }, 00:20:34.526 "peer_address": { 00:20:34.526 "adrfam": "IPv4", 00:20:34.526 "traddr": "10.0.0.1", 00:20:34.526 "trsvcid": "59128", 00:20:34.526 "trtype": "TCP" 00:20:34.526 }, 00:20:34.526 "qid": 0, 00:20:34.526 "state": "enabled" 00:20:34.526 } 00:20:34.526 ]' 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.526 10:01:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.785 10:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:20:35.719 10:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.720 10:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:35.720 10:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.720 10:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.720 10:01:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.720 10:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:35.720 10:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:35.720 10:01:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.720 10:01:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.984 10:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:20:35.984 10:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:35.984 10:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.984 10:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:35.985 10:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.985 10:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:20:35.985 10:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.985 10:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.985 10:01:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.985 10:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:35.985 10:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:36.552 00:20:36.552 10:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:36.552 10:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:36.553 10:01:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.811 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.811 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.811 10:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.811 10:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.811 10:01:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.811 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:36.811 { 00:20:36.811 "auth": { 00:20:36.811 "dhgroup": "ffdhe8192", 00:20:36.811 "digest": "sha384", 00:20:36.811 "state": "completed" 00:20:36.811 }, 00:20:36.811 "cntlid": 89, 00:20:36.811 "listen_address": { 00:20:36.811 "adrfam": "IPv4", 00:20:36.811 "traddr": "10.0.0.2", 00:20:36.811 "trsvcid": "4420", 00:20:36.811 "trtype": "TCP" 00:20:36.811 }, 00:20:36.811 "peer_address": { 00:20:36.811 "adrfam": "IPv4", 00:20:36.811 "traddr": "10.0.0.1", 00:20:36.811 "trsvcid": "59174", 00:20:36.811 "trtype": "TCP" 00:20:36.811 }, 00:20:36.811 "qid": 0, 00:20:36.811 "state": "enabled" 00:20:36.811 } 00:20:36.811 ]' 00:20:36.811 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:36.811 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:20:36.811 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:37.070 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.070 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:37.070 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.070 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.070 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.328 10:01:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:20:38.260 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.260 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:38.260 10:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.260 10:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.260 10:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.260 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:38.260 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.260 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.518 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:20:38.518 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:38.518 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.518 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:38.518 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.518 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:20:38.518 10:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.518 10:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.518 10:01:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.518 10:01:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:38.518 10:01:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:39.082 00:20:39.082 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:39.082 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:39.082 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.344 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.344 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.345 10:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.345 10:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.345 10:01:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.345 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:39.345 { 00:20:39.345 "auth": { 00:20:39.345 "dhgroup": "ffdhe8192", 00:20:39.345 "digest": "sha384", 00:20:39.345 "state": "completed" 00:20:39.345 }, 00:20:39.345 "cntlid": 91, 00:20:39.345 "listen_address": { 00:20:39.345 "adrfam": "IPv4", 00:20:39.345 "traddr": "10.0.0.2", 00:20:39.345 "trsvcid": "4420", 00:20:39.345 "trtype": "TCP" 00:20:39.345 }, 00:20:39.345 "peer_address": { 00:20:39.345 "adrfam": "IPv4", 00:20:39.345 "traddr": "10.0.0.1", 00:20:39.345 "trsvcid": "59206", 00:20:39.345 "trtype": "TCP" 00:20:39.345 }, 00:20:39.345 "qid": 0, 00:20:39.345 "state": "enabled" 00:20:39.345 } 00:20:39.345 ]' 00:20:39.345 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:39.602 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.602 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:39.602 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.602 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:39.602 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.602 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.602 10:01:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.860 10:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:20:40.795 10:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.795 10:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:40.795 10:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.795 10:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.795 10:01:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.795 10:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:40.795 10:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.795 10:01:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.795 10:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:20:40.795 10:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:40.795 10:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.795 10:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:40.795 10:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.795 10:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:20:40.795 10:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.795 10:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.795 10:01:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.795 10:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:40.795 10:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:41.729 00:20:41.729 10:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:41.729 10:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:41.729 10:01:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:41.986 { 00:20:41.986 "auth": { 00:20:41.986 "dhgroup": "ffdhe8192", 00:20:41.986 "digest": "sha384", 00:20:41.986 "state": 
"completed" 00:20:41.986 }, 00:20:41.986 "cntlid": 93, 00:20:41.986 "listen_address": { 00:20:41.986 "adrfam": "IPv4", 00:20:41.986 "traddr": "10.0.0.2", 00:20:41.986 "trsvcid": "4420", 00:20:41.986 "trtype": "TCP" 00:20:41.986 }, 00:20:41.986 "peer_address": { 00:20:41.986 "adrfam": "IPv4", 00:20:41.986 "traddr": "10.0.0.1", 00:20:41.986 "trsvcid": "41712", 00:20:41.986 "trtype": "TCP" 00:20:41.986 }, 00:20:41.986 "qid": 0, 00:20:41.986 "state": "enabled" 00:20:41.986 } 00:20:41.986 ]' 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.986 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.587 10:01:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:20:43.206 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.206 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:43.206 10:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.206 10:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.206 10:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.206 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:43.206 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.206 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.463 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:20:43.463 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:43.463 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.463 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:43.463 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.463 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:20:43.463 10:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.463 10:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.463 10:01:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.463 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.463 10:01:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.395 00:20:44.395 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:44.395 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:44.395 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:44.719 { 00:20:44.719 "auth": { 00:20:44.719 "dhgroup": "ffdhe8192", 00:20:44.719 "digest": "sha384", 00:20:44.719 "state": "completed" 00:20:44.719 }, 00:20:44.719 "cntlid": 95, 00:20:44.719 "listen_address": { 00:20:44.719 "adrfam": "IPv4", 00:20:44.719 "traddr": "10.0.0.2", 00:20:44.719 "trsvcid": "4420", 00:20:44.719 "trtype": "TCP" 00:20:44.719 }, 00:20:44.719 "peer_address": { 00:20:44.719 "adrfam": "IPv4", 00:20:44.719 "traddr": "10.0.0.1", 00:20:44.719 "trsvcid": "41750", 00:20:44.719 "trtype": "TCP" 00:20:44.719 }, 00:20:44.719 "qid": 0, 00:20:44.719 "state": "enabled" 00:20:44.719 } 00:20:44.719 ]' 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.719 10:01:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.976 10:01:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:20:45.906 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.906 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:45.906 10:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.906 10:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.906 10:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.906 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:45.906 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.906 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:45.906 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:45.906 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:46.204 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:20:46.204 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:46.204 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.204 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:46.204 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:46.204 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:20:46.204 10:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.204 10:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.204 10:01:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.204 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:46.204 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:46.769 00:20:46.769 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:46.769 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.769 10:01:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:46.769 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.769 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.769 10:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.769 10:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.769 10:01:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.769 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:46.769 { 00:20:46.769 "auth": { 00:20:46.769 "dhgroup": "null", 00:20:46.769 "digest": "sha512", 00:20:46.769 "state": "completed" 00:20:46.769 }, 00:20:46.769 "cntlid": 97, 00:20:46.769 "listen_address": { 00:20:46.769 "adrfam": "IPv4", 00:20:46.769 "traddr": "10.0.0.2", 00:20:46.769 "trsvcid": "4420", 00:20:46.769 "trtype": "TCP" 00:20:46.769 }, 00:20:46.769 "peer_address": { 00:20:46.769 "adrfam": "IPv4", 00:20:46.769 "traddr": "10.0.0.1", 00:20:46.769 "trsvcid": "41784", 00:20:46.769 "trtype": "TCP" 00:20:46.769 }, 00:20:46.769 "qid": 0, 00:20:46.769 "state": "enabled" 00:20:46.769 } 00:20:46.769 ]' 00:20:47.027 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:47.027 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.027 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:47.027 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:47.027 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:47.027 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.027 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.027 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.284 10:01:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:20:48.216 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.216 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:48.216 10:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.216 10:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.216 10:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.216 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:48.216 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:20:48.216 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.472 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:20:48.472 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:48.472 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.472 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:48.472 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:48.472 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:20:48.472 10:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.472 10:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.472 10:01:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.472 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:48.472 10:01:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:48.730 00:20:48.986 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:48.986 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.986 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:49.243 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.243 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.243 10:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.243 10:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.243 10:01:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.243 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:49.243 { 00:20:49.243 "auth": { 00:20:49.243 "dhgroup": "null", 00:20:49.243 "digest": "sha512", 00:20:49.243 "state": "completed" 00:20:49.243 }, 00:20:49.243 "cntlid": 99, 00:20:49.243 "listen_address": { 00:20:49.243 "adrfam": "IPv4", 00:20:49.243 "traddr": "10.0.0.2", 00:20:49.243 "trsvcid": "4420", 00:20:49.243 "trtype": "TCP" 00:20:49.243 }, 00:20:49.243 "peer_address": { 00:20:49.243 "adrfam": "IPv4", 00:20:49.243 "traddr": "10.0.0.1", 00:20:49.243 "trsvcid": "41808", 00:20:49.243 "trtype": "TCP" 00:20:49.243 }, 00:20:49.243 "qid": 0, 00:20:49.243 "state": "enabled" 00:20:49.243 } 00:20:49.243 ]' 00:20:49.243 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:49.243 10:01:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.243 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:49.243 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:49.243 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:49.500 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.500 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.500 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.757 10:01:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:20:50.324 10:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.583 10:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:50.583 10:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.583 10:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.583 10:01:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.583 10:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:50.583 10:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.583 10:01:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.841 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:20:50.841 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:50.841 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.841 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:50.841 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.841 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:20:50.841 10:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.841 10:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.841 10:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.841 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:50.841 10:01:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:51.098 00:20:51.098 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:51.098 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.098 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:51.355 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.355 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.355 10:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.355 10:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.355 10:01:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.355 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:51.355 { 00:20:51.355 "auth": { 00:20:51.355 "dhgroup": "null", 00:20:51.355 "digest": "sha512", 00:20:51.355 "state": "completed" 00:20:51.355 }, 00:20:51.355 "cntlid": 101, 00:20:51.355 "listen_address": { 00:20:51.355 "adrfam": "IPv4", 00:20:51.355 "traddr": "10.0.0.2", 00:20:51.355 "trsvcid": "4420", 00:20:51.355 "trtype": "TCP" 00:20:51.355 }, 00:20:51.355 "peer_address": { 00:20:51.355 "adrfam": "IPv4", 00:20:51.355 "traddr": "10.0.0.1", 00:20:51.355 "trsvcid": "51658", 00:20:51.355 "trtype": "TCP" 00:20:51.355 }, 00:20:51.355 "qid": 0, 00:20:51.355 "state": "enabled" 00:20:51.355 } 00:20:51.355 ]' 00:20:51.355 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:51.355 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.355 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:51.612 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:51.612 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:51.612 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.612 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.612 10:01:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.870 10:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:20:52.826 10:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.826 10:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:52.826 10:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.826 10:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.826 10:01:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.826 10:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:52.826 10:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.826 10:01:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:53.086 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:20:53.086 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:53.086 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.086 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:53.086 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:53.086 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:20:53.086 10:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.086 10:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.086 10:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.086 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.086 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.344 00:20:53.344 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:53.344 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:53.344 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.640 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.640 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.640 10:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.641 10:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.641 10:01:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.641 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:53.641 { 00:20:53.641 "auth": { 00:20:53.641 "dhgroup": "null", 00:20:53.641 "digest": "sha512", 00:20:53.641 "state": "completed" 00:20:53.641 }, 
00:20:53.641 "cntlid": 103, 00:20:53.641 "listen_address": { 00:20:53.641 "adrfam": "IPv4", 00:20:53.641 "traddr": "10.0.0.2", 00:20:53.641 "trsvcid": "4420", 00:20:53.641 "trtype": "TCP" 00:20:53.641 }, 00:20:53.641 "peer_address": { 00:20:53.641 "adrfam": "IPv4", 00:20:53.641 "traddr": "10.0.0.1", 00:20:53.641 "trsvcid": "51696", 00:20:53.641 "trtype": "TCP" 00:20:53.641 }, 00:20:53.641 "qid": 0, 00:20:53.641 "state": "enabled" 00:20:53.641 } 00:20:53.641 ]' 00:20:53.641 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:53.641 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.641 10:01:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:53.641 10:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:53.641 10:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:53.899 10:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.899 10:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.899 10:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.157 10:01:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:20:54.720 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.720 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:54.720 10:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.720 10:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.720 10:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.720 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.720 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:54.720 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:54.720 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.284 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:20:55.284 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:55.284 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.284 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:55.284 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.284 10:01:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:20:55.284 10:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.284 10:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.284 10:01:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.284 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:55.284 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:55.541 00:20:55.541 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:55.541 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:55.541 10:01:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.799 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.799 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.799 10:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.799 10:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.057 10:01:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.057 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:56.057 { 00:20:56.057 "auth": { 00:20:56.057 "dhgroup": "ffdhe2048", 00:20:56.057 "digest": "sha512", 00:20:56.057 "state": "completed" 00:20:56.057 }, 00:20:56.057 "cntlid": 105, 00:20:56.057 "listen_address": { 00:20:56.057 "adrfam": "IPv4", 00:20:56.057 "traddr": "10.0.0.2", 00:20:56.057 "trsvcid": "4420", 00:20:56.057 "trtype": "TCP" 00:20:56.057 }, 00:20:56.057 "peer_address": { 00:20:56.057 "adrfam": "IPv4", 00:20:56.057 "traddr": "10.0.0.1", 00:20:56.057 "trsvcid": "51724", 00:20:56.057 "trtype": "TCP" 00:20:56.057 }, 00:20:56.057 "qid": 0, 00:20:56.057 "state": "enabled" 00:20:56.057 } 00:20:56.057 ]' 00:20:56.057 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:56.057 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.057 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:56.057 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.057 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:56.057 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.057 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.057 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.314 10:01:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:20:57.247 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.247 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:57.247 10:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.247 10:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.247 10:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.247 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:57.247 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:57.247 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:57.504 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:20:57.504 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:57.504 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.504 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:57.504 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:57.504 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:20:57.504 10:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.504 10:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.504 10:01:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.504 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:57.504 10:01:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:57.760 00:20:57.760 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:57.760 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.760 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
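[editor's note] The trace above repeats one verification cycle per digest/dhgroup/key combination. As a plain-language summary (not part of the captured output), the flow that target/auth.sh is exercising at this point can be sketched as the following RPC/CLI sequence. The NQNs, addresses, and flags are taken verbatim from the trace; the variable names and the commented target-side invocations are illustrative assumptions, not an excerpt of the script.

  # Hedged sketch of one auth.sh iteration (sha512 + ffdhe2048 + key1 in this part of the log).
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTRPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"   # host-side (initiator) RPC socket seen in the trace

  # Host side: restrict the bdev_nvme layer to the digest/dhgroup pair under test.
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Target side (appears as rpc_cmd in the trace): allow the host NQN with the key under test.
  # rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1

  # Host side: attach a controller, which forces a DH-HMAC-CHAP authenticated connection.
  $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1

  # Verify: controller name on the host, negotiated auth parameters on the target.
  $HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'        # the test expects "nvme0"
  # rpc.py nvmf_subsystem_get_qpairs "$SUBNQN"                  # digest/dhgroup/state checked with jq

  # Tear down: detach, re-exercise the same key through nvme-cli, then remove the host entry.
  $HOSTRPC bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:...   # secret elided; full value is in the trace
  nvme disconnect -n "$SUBNQN"
  # rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

[end editor's note]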
00:20:58.017 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.017 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.017 10:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.017 10:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.017 10:01:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.017 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:58.017 { 00:20:58.017 "auth": { 00:20:58.017 "dhgroup": "ffdhe2048", 00:20:58.017 "digest": "sha512", 00:20:58.017 "state": "completed" 00:20:58.017 }, 00:20:58.017 "cntlid": 107, 00:20:58.017 "listen_address": { 00:20:58.017 "adrfam": "IPv4", 00:20:58.017 "traddr": "10.0.0.2", 00:20:58.017 "trsvcid": "4420", 00:20:58.017 "trtype": "TCP" 00:20:58.017 }, 00:20:58.017 "peer_address": { 00:20:58.017 "adrfam": "IPv4", 00:20:58.017 "traddr": "10.0.0.1", 00:20:58.017 "trsvcid": "51756", 00:20:58.017 "trtype": "TCP" 00:20:58.017 }, 00:20:58.017 "qid": 0, 00:20:58.017 "state": "enabled" 00:20:58.017 } 00:20:58.017 ]' 00:20:58.017 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:58.017 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.017 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:58.017 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:58.017 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:58.275 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.275 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.275 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.533 10:01:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:20:59.123 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.123 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:20:59.123 10:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.123 10:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.123 10:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.123 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:59.123 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.123 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.381 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:20:59.381 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:59.381 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.381 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:59.381 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:59.381 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:20:59.381 10:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.381 10:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.381 10:01:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.381 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:59.381 10:01:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:59.946 00:20:59.946 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:59.946 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:59.946 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:00.205 { 00:21:00.205 "auth": { 00:21:00.205 "dhgroup": "ffdhe2048", 00:21:00.205 "digest": "sha512", 00:21:00.205 "state": "completed" 00:21:00.205 }, 00:21:00.205 "cntlid": 109, 00:21:00.205 "listen_address": { 00:21:00.205 "adrfam": "IPv4", 00:21:00.205 "traddr": "10.0.0.2", 00:21:00.205 "trsvcid": "4420", 00:21:00.205 "trtype": "TCP" 00:21:00.205 }, 00:21:00.205 "peer_address": { 00:21:00.205 "adrfam": "IPv4", 00:21:00.205 "traddr": "10.0.0.1", 00:21:00.205 "trsvcid": "51792", 00:21:00.205 "trtype": "TCP" 00:21:00.205 }, 00:21:00.205 "qid": 0, 00:21:00.205 "state": "enabled" 00:21:00.205 } 00:21:00.205 ]' 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.205 10:01:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.205 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.770 10:01:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:21:01.334 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.334 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:01.334 10:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.334 10:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.334 10:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.334 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:01.334 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.334 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.592 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:21:01.592 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:01.592 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.592 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:01.592 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.592 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:21:01.592 10:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.592 10:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.850 10:01:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.850 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.850 10:01:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.109 00:21:02.109 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:02.109 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:02.109 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.365 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.365 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.365 10:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:02.365 10:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.365 10:01:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:02.365 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:02.365 { 00:21:02.365 "auth": { 00:21:02.365 "dhgroup": "ffdhe2048", 00:21:02.365 "digest": "sha512", 00:21:02.365 "state": "completed" 00:21:02.365 }, 00:21:02.365 "cntlid": 111, 00:21:02.365 "listen_address": { 00:21:02.365 "adrfam": "IPv4", 00:21:02.365 "traddr": "10.0.0.2", 00:21:02.365 "trsvcid": "4420", 00:21:02.365 "trtype": "TCP" 00:21:02.365 }, 00:21:02.365 "peer_address": { 00:21:02.365 "adrfam": "IPv4", 00:21:02.365 "traddr": "10.0.0.1", 00:21:02.365 "trsvcid": "48342", 00:21:02.365 "trtype": "TCP" 00:21:02.365 }, 00:21:02.365 "qid": 0, 00:21:02.365 "state": "enabled" 00:21:02.365 } 00:21:02.365 ]' 00:21:02.365 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:02.365 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.365 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:02.365 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.365 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:02.622 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.622 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.622 10:01:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.879 10:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:21:03.445 10:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.445 10:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:03.445 10:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.445 10:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.445 10:01:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.445 10:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.445 10:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:03.445 10:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:03.445 10:01:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.011 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:21:04.011 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:04.011 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.011 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:04.011 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:04.011 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:21:04.011 10:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.011 10:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.011 10:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.011 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:04.011 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:04.269 00:21:04.269 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:04.269 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.269 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:04.527 { 
00:21:04.527 "auth": { 00:21:04.527 "dhgroup": "ffdhe3072", 00:21:04.527 "digest": "sha512", 00:21:04.527 "state": "completed" 00:21:04.527 }, 00:21:04.527 "cntlid": 113, 00:21:04.527 "listen_address": { 00:21:04.527 "adrfam": "IPv4", 00:21:04.527 "traddr": "10.0.0.2", 00:21:04.527 "trsvcid": "4420", 00:21:04.527 "trtype": "TCP" 00:21:04.527 }, 00:21:04.527 "peer_address": { 00:21:04.527 "adrfam": "IPv4", 00:21:04.527 "traddr": "10.0.0.1", 00:21:04.527 "trsvcid": "48362", 00:21:04.527 "trtype": "TCP" 00:21:04.527 }, 00:21:04.527 "qid": 0, 00:21:04.527 "state": "enabled" 00:21:04.527 } 00:21:04.527 ]' 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.527 10:01:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.093 10:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:21:05.659 10:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.659 10:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:05.659 10:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.659 10:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.659 10:01:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.659 10:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:05.659 10:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.659 10:01:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.917 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:21:05.917 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:05.917 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.917 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:05.917 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:05.917 10:01:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:21:05.917 10:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.917 10:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.917 10:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.917 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:05.917 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:06.484 00:21:06.484 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:06.484 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.484 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:06.743 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.743 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.743 10:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.743 10:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.743 10:01:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.743 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:06.743 { 00:21:06.743 "auth": { 00:21:06.743 "dhgroup": "ffdhe3072", 00:21:06.743 "digest": "sha512", 00:21:06.743 "state": "completed" 00:21:06.743 }, 00:21:06.743 "cntlid": 115, 00:21:06.743 "listen_address": { 00:21:06.743 "adrfam": "IPv4", 00:21:06.743 "traddr": "10.0.0.2", 00:21:06.743 "trsvcid": "4420", 00:21:06.743 "trtype": "TCP" 00:21:06.743 }, 00:21:06.743 "peer_address": { 00:21:06.743 "adrfam": "IPv4", 00:21:06.743 "traddr": "10.0.0.1", 00:21:06.743 "trsvcid": "48394", 00:21:06.743 "trtype": "TCP" 00:21:06.743 }, 00:21:06.743 "qid": 0, 00:21:06.743 "state": "enabled" 00:21:06.743 } 00:21:06.743 ]' 00:21:06.743 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:06.743 10:01:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.743 10:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:06.743 10:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:06.743 10:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:06.743 10:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.743 10:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.743 10:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.310 10:01:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:08.291 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:08.856 00:21:08.856 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:08.856 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.856 10:01:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 
-- # jq -r '.[].name' 00:21:09.114 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.114 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.114 10:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.114 10:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.114 10:01:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.114 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:09.114 { 00:21:09.114 "auth": { 00:21:09.114 "dhgroup": "ffdhe3072", 00:21:09.114 "digest": "sha512", 00:21:09.114 "state": "completed" 00:21:09.114 }, 00:21:09.114 "cntlid": 117, 00:21:09.114 "listen_address": { 00:21:09.114 "adrfam": "IPv4", 00:21:09.114 "traddr": "10.0.0.2", 00:21:09.114 "trsvcid": "4420", 00:21:09.114 "trtype": "TCP" 00:21:09.114 }, 00:21:09.114 "peer_address": { 00:21:09.114 "adrfam": "IPv4", 00:21:09.114 "traddr": "10.0.0.1", 00:21:09.114 "trsvcid": "48428", 00:21:09.114 "trtype": "TCP" 00:21:09.114 }, 00:21:09.114 "qid": 0, 00:21:09.114 "state": "enabled" 00:21:09.114 } 00:21:09.114 ]' 00:21:09.114 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:09.114 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.114 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:09.114 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.114 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:09.371 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.371 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.371 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.627 10:01:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:21:10.193 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.193 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:10.193 10:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.193 10:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.193 10:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.193 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:10.193 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.193 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.452 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:21:10.452 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:10.452 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.452 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:10.452 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:10.452 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:21:10.452 10:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.452 10:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.452 10:01:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.452 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.452 10:01:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.026 00:21:11.026 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:11.026 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:11.026 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.284 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.284 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.284 10:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.284 10:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.284 10:01:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.284 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:11.284 { 00:21:11.284 "auth": { 00:21:11.284 "dhgroup": "ffdhe3072", 00:21:11.284 "digest": "sha512", 00:21:11.284 "state": "completed" 00:21:11.284 }, 00:21:11.284 "cntlid": 119, 00:21:11.284 "listen_address": { 00:21:11.284 "adrfam": "IPv4", 00:21:11.284 "traddr": "10.0.0.2", 00:21:11.284 "trsvcid": "4420", 00:21:11.284 "trtype": "TCP" 00:21:11.284 }, 00:21:11.284 "peer_address": { 00:21:11.284 "adrfam": "IPv4", 00:21:11.284 "traddr": "10.0.0.1", 00:21:11.284 "trsvcid": "42814", 00:21:11.284 "trtype": "TCP" 00:21:11.284 }, 00:21:11.284 "qid": 0, 00:21:11.284 "state": "enabled" 00:21:11.284 } 00:21:11.284 ]' 00:21:11.284 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:11.284 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
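Each connect_authenticate pass traced above follows the same fixed command sequence. A condensed sketch of one pass, assembled only from commands visible in this log (hostrpc is the trace's wrapper for rpc.py -s /var/tmp/host.sock; $hostnqn, $hostid and $key3_secret are placeholders for the literal NQN/UUID and DHHC-1 values shown in the trace), might look like:

    # sketch of one connect_authenticate iteration (digest sha512, dhgroup ffdhe3072, key3 shown)
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'         # expected to print nvme0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0  # auth.digest/dhgroup/state checked via jq
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
            --hostid "$hostid" --dhchap-secret "$key3_secret"     # DHHC-1:03:... value from the trace
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
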
00:21:11.284 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:11.542 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.542 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:11.542 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.542 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.542 10:01:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.800 10:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:21:12.366 10:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.366 10:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:12.366 10:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.366 10:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.366 10:01:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.366 10:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.366 10:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:12.366 10:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:12.366 10:01:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:12.931 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:21:12.931 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:12.931 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:12.931 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:12.931 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:12.931 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:21:12.931 10:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.931 10:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.931 10:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.931 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:12.931 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:13.189 00:21:13.189 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:13.189 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:13.189 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.755 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.755 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.755 10:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.755 10:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.755 10:01:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.755 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:13.755 { 00:21:13.755 "auth": { 00:21:13.755 "dhgroup": "ffdhe4096", 00:21:13.755 "digest": "sha512", 00:21:13.755 "state": "completed" 00:21:13.755 }, 00:21:13.755 "cntlid": 121, 00:21:13.755 "listen_address": { 00:21:13.755 "adrfam": "IPv4", 00:21:13.755 "traddr": "10.0.0.2", 00:21:13.755 "trsvcid": "4420", 00:21:13.755 "trtype": "TCP" 00:21:13.755 }, 00:21:13.755 "peer_address": { 00:21:13.755 "adrfam": "IPv4", 00:21:13.755 "traddr": "10.0.0.1", 00:21:13.755 "trsvcid": "42862", 00:21:13.755 "trtype": "TCP" 00:21:13.755 }, 00:21:13.755 "qid": 0, 00:21:13.755 "state": "enabled" 00:21:13.755 } 00:21:13.755 ]' 00:21:13.755 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:13.755 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.755 10:01:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:13.755 10:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:13.755 10:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:13.755 10:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.755 10:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.755 10:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.321 10:01:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:21:14.887 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.887 10:01:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:14.887 10:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.887 10:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.887 10:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.887 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:14.887 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:14.887 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.144 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:21:15.144 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:15.144 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.144 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:15.144 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:15.144 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:21:15.144 10:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.144 10:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.144 10:01:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.145 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:15.145 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:15.710 00:21:15.710 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:15.710 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:15.710 10:01:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.973 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.973 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.973 10:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.973 10:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.973 10:01:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.973 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:15.973 { 
00:21:15.973 "auth": { 00:21:15.973 "dhgroup": "ffdhe4096", 00:21:15.973 "digest": "sha512", 00:21:15.973 "state": "completed" 00:21:15.973 }, 00:21:15.973 "cntlid": 123, 00:21:15.973 "listen_address": { 00:21:15.973 "adrfam": "IPv4", 00:21:15.973 "traddr": "10.0.0.2", 00:21:15.973 "trsvcid": "4420", 00:21:15.973 "trtype": "TCP" 00:21:15.973 }, 00:21:15.973 "peer_address": { 00:21:15.973 "adrfam": "IPv4", 00:21:15.973 "traddr": "10.0.0.1", 00:21:15.973 "trsvcid": "42898", 00:21:15.973 "trtype": "TCP" 00:21:15.973 }, 00:21:15.973 "qid": 0, 00:21:15.973 "state": "enabled" 00:21:15.973 } 00:21:15.973 ]' 00:21:15.973 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:15.973 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.973 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:15.973 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:15.973 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:16.232 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.232 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.232 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.490 10:01:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:21:17.424 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.424 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.425 10:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.684 10:01:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.684 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:17.684 10:01:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:17.943 00:21:17.943 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:17.943 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:17.943 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.201 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.201 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.201 10:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.201 10:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.201 10:01:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.201 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:18.201 { 00:21:18.201 "auth": { 00:21:18.201 "dhgroup": "ffdhe4096", 00:21:18.201 "digest": "sha512", 00:21:18.201 "state": "completed" 00:21:18.201 }, 00:21:18.201 "cntlid": 125, 00:21:18.201 "listen_address": { 00:21:18.201 "adrfam": "IPv4", 00:21:18.201 "traddr": "10.0.0.2", 00:21:18.201 "trsvcid": "4420", 00:21:18.201 "trtype": "TCP" 00:21:18.201 }, 00:21:18.201 "peer_address": { 00:21:18.201 "adrfam": "IPv4", 00:21:18.201 "traddr": "10.0.0.1", 00:21:18.201 "trsvcid": "42934", 00:21:18.201 "trtype": "TCP" 00:21:18.201 }, 00:21:18.201 "qid": 0, 00:21:18.201 "state": "enabled" 00:21:18.201 } 00:21:18.201 ]' 00:21:18.201 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:18.459 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.459 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:18.459 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.459 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:18.459 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.459 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.459 10:01:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.717 10:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:21:19.650 10:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.650 10:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:19.650 10:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.650 10:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.650 10:01:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.650 10:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:19.650 10:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.650 10:01:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.961 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:21:19.961 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:19.961 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.961 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:19.961 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:19.961 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:21:19.961 10:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.961 10:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.961 10:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.961 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.961 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.234 00:21:20.234 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:20.234 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.234 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
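The qpairs JSON captured at each step is checked field by field with jq; the [[ ... == \s\h\a\5\1\2 ]] comparisons in the trace are just xtrace renderings of plain string tests. Roughly equivalent checks, assuming the nvmf_subsystem_get_qpairs output has been captured into $qpairs as in the trace, are:

    # verify the negotiated auth parameters reported for the active qpair (ffdhe4096 pass shown)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
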
00:21:20.493 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.493 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.493 10:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.493 10:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.493 10:01:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.493 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:20.493 { 00:21:20.493 "auth": { 00:21:20.493 "dhgroup": "ffdhe4096", 00:21:20.493 "digest": "sha512", 00:21:20.493 "state": "completed" 00:21:20.493 }, 00:21:20.493 "cntlid": 127, 00:21:20.493 "listen_address": { 00:21:20.493 "adrfam": "IPv4", 00:21:20.493 "traddr": "10.0.0.2", 00:21:20.493 "trsvcid": "4420", 00:21:20.493 "trtype": "TCP" 00:21:20.493 }, 00:21:20.493 "peer_address": { 00:21:20.493 "adrfam": "IPv4", 00:21:20.493 "traddr": "10.0.0.1", 00:21:20.493 "trsvcid": "59440", 00:21:20.493 "trtype": "TCP" 00:21:20.493 }, 00:21:20.493 "qid": 0, 00:21:20.493 "state": "enabled" 00:21:20.493 } 00:21:20.493 ]' 00:21:20.493 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:20.751 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.751 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:20.751 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.751 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:20.751 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.751 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.751 10:01:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.010 10:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:21:21.578 10:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.578 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.578 10:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:21.578 10:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.578 10:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.837 10:01:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.837 10:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.837 10:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:21.837 10:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:21:21.837 10:01:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:21.837 10:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:21:21.837 10:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:21.837 10:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.837 10:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:21.837 10:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:21.837 10:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:21:21.837 10:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.837 10:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.097 10:01:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.097 10:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:22.097 10:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:22.422 00:21:22.422 10:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:22.422 10:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:22.422 10:01:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.989 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.989 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.989 10:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.989 10:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.989 10:02:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.989 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:22.989 { 00:21:22.989 "auth": { 00:21:22.989 "dhgroup": "ffdhe6144", 00:21:22.989 "digest": "sha512", 00:21:22.990 "state": "completed" 00:21:22.990 }, 00:21:22.990 "cntlid": 129, 00:21:22.990 "listen_address": { 00:21:22.990 "adrfam": "IPv4", 00:21:22.990 "traddr": "10.0.0.2", 00:21:22.990 "trsvcid": "4420", 00:21:22.990 "trtype": "TCP" 00:21:22.990 }, 00:21:22.990 "peer_address": { 00:21:22.990 "adrfam": "IPv4", 00:21:22.990 "traddr": "10.0.0.1", 00:21:22.990 "trsvcid": "59466", 00:21:22.990 "trtype": "TCP" 00:21:22.990 }, 00:21:22.990 "qid": 0, 00:21:22.990 "state": "enabled" 00:21:22.990 } 00:21:22.990 ]' 00:21:22.990 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
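The auth.sh@85/@86 lines in the trace show the sweep being driven by two nested loops over the configured DH groups and key indexes. A sketch of that outer structure, limited to the sha512 pass and the groups and keys that actually appear in this part of the log, is:

    # sweep over DH groups and key ids, one connect_authenticate per combination (sha512 pass)
    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in 0 1 2 3; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
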
00:21:22.990 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.990 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:22.990 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.990 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:22.990 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.990 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.990 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.249 10:02:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:21:24.184 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.185 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:24.185 10:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.185 10:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.185 10:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.185 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:24.185 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.185 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.443 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:21:24.443 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:24.443 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.443 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:24.443 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:24.443 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:21:24.444 10:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.444 10:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.444 10:02:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.444 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:24.444 10:02:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:24.703 00:21:24.703 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:24.703 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:24.703 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.961 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.961 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.961 10:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.961 10:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.228 10:02:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:25.228 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:25.228 { 00:21:25.228 "auth": { 00:21:25.228 "dhgroup": "ffdhe6144", 00:21:25.228 "digest": "sha512", 00:21:25.228 "state": "completed" 00:21:25.228 }, 00:21:25.228 "cntlid": 131, 00:21:25.228 "listen_address": { 00:21:25.228 "adrfam": "IPv4", 00:21:25.228 "traddr": "10.0.0.2", 00:21:25.228 "trsvcid": "4420", 00:21:25.228 "trtype": "TCP" 00:21:25.228 }, 00:21:25.228 "peer_address": { 00:21:25.228 "adrfam": "IPv4", 00:21:25.228 "traddr": "10.0.0.1", 00:21:25.228 "trsvcid": "59500", 00:21:25.228 "trtype": "TCP" 00:21:25.228 }, 00:21:25.228 "qid": 0, 00:21:25.228 "state": "enabled" 00:21:25.228 } 00:21:25.228 ]' 00:21:25.228 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:25.228 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.228 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:25.228 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.228 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:25.228 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.228 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.228 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.522 10:02:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:21:26.086 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.086 10:02:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:26.086 10:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.086 10:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.086 10:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.086 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:26.086 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.086 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.344 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:21:26.344 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:26.344 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.344 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:26.344 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:26.344 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:21:26.344 10:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.344 10:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.344 10:02:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.344 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:26.344 10:02:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:26.911 00:21:26.911 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:26.911 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:26.911 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:27.170 { 00:21:27.170 "auth": { 
00:21:27.170 "dhgroup": "ffdhe6144", 00:21:27.170 "digest": "sha512", 00:21:27.170 "state": "completed" 00:21:27.170 }, 00:21:27.170 "cntlid": 133, 00:21:27.170 "listen_address": { 00:21:27.170 "adrfam": "IPv4", 00:21:27.170 "traddr": "10.0.0.2", 00:21:27.170 "trsvcid": "4420", 00:21:27.170 "trtype": "TCP" 00:21:27.170 }, 00:21:27.170 "peer_address": { 00:21:27.170 "adrfam": "IPv4", 00:21:27.170 "traddr": "10.0.0.1", 00:21:27.170 "trsvcid": "59526", 00:21:27.170 "trtype": "TCP" 00:21:27.170 }, 00:21:27.170 "qid": 0, 00:21:27.170 "state": "enabled" 00:21:27.170 } 00:21:27.170 ]' 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.170 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.736 10:02:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:21:28.302 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.302 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:28.302 10:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.302 10:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.302 10:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.302 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:28.302 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.302 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.868 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:21:28.868 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:28.868 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.868 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:28.868 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:28.868 10:02:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:21:28.868 10:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.868 10:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.868 10:02:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.868 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.868 10:02:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.434 00:21:29.434 10:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:29.434 10:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:29.434 10:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.693 10:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.693 10:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.693 10:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:29.693 10:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.693 10:02:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:29.693 10:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:29.693 { 00:21:29.693 "auth": { 00:21:29.693 "dhgroup": "ffdhe6144", 00:21:29.693 "digest": "sha512", 00:21:29.693 "state": "completed" 00:21:29.693 }, 00:21:29.693 "cntlid": 135, 00:21:29.693 "listen_address": { 00:21:29.693 "adrfam": "IPv4", 00:21:29.693 "traddr": "10.0.0.2", 00:21:29.693 "trsvcid": "4420", 00:21:29.693 "trtype": "TCP" 00:21:29.693 }, 00:21:29.693 "peer_address": { 00:21:29.693 "adrfam": "IPv4", 00:21:29.693 "traddr": "10.0.0.1", 00:21:29.693 "trsvcid": "59544", 00:21:29.693 "trtype": "TCP" 00:21:29.693 }, 00:21:29.693 "qid": 0, 00:21:29.693 "state": "enabled" 00:21:29.693 } 00:21:29.693 ]' 00:21:29.693 10:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:29.693 10:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.693 10:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:29.693 10:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.693 10:02:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:29.693 10:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.693 10:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.693 10:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.267 10:02:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:21:30.832 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.832 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:30.832 10:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.832 10:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.832 10:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.832 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.832 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:30.832 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.832 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.092 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:21:31.092 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:31.092 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.092 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:31.092 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:31.092 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:21:31.092 10:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.092 10:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.092 10:02:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.092 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:31.092 10:02:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:32.027 00:21:32.027 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:32.027 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:32.027 10:02:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.027 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.027 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.027 10:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:32.027 10:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.285 10:02:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:32.285 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:32.285 { 00:21:32.285 "auth": { 00:21:32.285 "dhgroup": "ffdhe8192", 00:21:32.285 "digest": "sha512", 00:21:32.285 "state": "completed" 00:21:32.285 }, 00:21:32.285 "cntlid": 137, 00:21:32.285 "listen_address": { 00:21:32.285 "adrfam": "IPv4", 00:21:32.285 "traddr": "10.0.0.2", 00:21:32.285 "trsvcid": "4420", 00:21:32.285 "trtype": "TCP" 00:21:32.285 }, 00:21:32.285 "peer_address": { 00:21:32.285 "adrfam": "IPv4", 00:21:32.285 "traddr": "10.0.0.1", 00:21:32.285 "trsvcid": "58750", 00:21:32.285 "trtype": "TCP" 00:21:32.285 }, 00:21:32.285 "qid": 0, 00:21:32.285 "state": "enabled" 00:21:32.285 } 00:21:32.285 ]' 00:21:32.285 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:32.285 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.285 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:32.285 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.285 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:32.285 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.285 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.285 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.543 10:02:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:21:33.479 10:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.479 10:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:33.479 10:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.479 10:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.479 10:02:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.479 10:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:33.479 10:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.479 10:02:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:33.737 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:21:33.737 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:33.737 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.737 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:33.737 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:33.737 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:21:33.737 10:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.737 10:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.737 10:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.737 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:33.737 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:34.302 00:21:34.560 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:34.560 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.560 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:34.885 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.885 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.885 10:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.885 10:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.885 10:02:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.885 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:34.885 { 00:21:34.885 "auth": { 00:21:34.885 "dhgroup": "ffdhe8192", 00:21:34.885 "digest": "sha512", 00:21:34.885 "state": "completed" 00:21:34.885 }, 00:21:34.885 "cntlid": 139, 00:21:34.885 "listen_address": { 00:21:34.885 "adrfam": "IPv4", 00:21:34.885 "traddr": "10.0.0.2", 00:21:34.885 "trsvcid": "4420", 00:21:34.885 "trtype": "TCP" 00:21:34.885 }, 00:21:34.885 "peer_address": { 00:21:34.885 "adrfam": "IPv4", 00:21:34.885 "traddr": "10.0.0.1", 00:21:34.885 "trsvcid": "58778", 00:21:34.885 "trtype": "TCP" 00:21:34.885 }, 00:21:34.885 "qid": 0, 00:21:34.885 "state": "enabled" 00:21:34.885 } 00:21:34.885 ]' 00:21:34.885 10:02:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
jq -r '.[0].auth.digest' 00:21:34.885 10:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.885 10:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:34.885 10:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.885 10:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:34.885 10:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.885 10:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.885 10:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.146 10:02:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:01:ODViYjgyMTI0NWU4MWUwYWEwMjI0Mzk4NGZlYTVjNGK8bk3f: 00:21:36.081 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.081 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:36.081 10:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.081 10:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.081 10:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.081 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:36.081 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.081 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.339 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:21:36.339 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:36.339 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.339 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:36.339 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:36.339 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key2 00:21:36.339 10:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.339 10:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.339 10:02:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.339 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:36.339 10:02:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:36.905 00:21:36.905 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:36.905 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.905 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:37.227 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.227 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.227 10:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.227 10:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.227 10:02:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.227 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:37.227 { 00:21:37.227 "auth": { 00:21:37.227 "dhgroup": "ffdhe8192", 00:21:37.227 "digest": "sha512", 00:21:37.227 "state": "completed" 00:21:37.227 }, 00:21:37.227 "cntlid": 141, 00:21:37.227 "listen_address": { 00:21:37.227 "adrfam": "IPv4", 00:21:37.227 "traddr": "10.0.0.2", 00:21:37.227 "trsvcid": "4420", 00:21:37.227 "trtype": "TCP" 00:21:37.227 }, 00:21:37.227 "peer_address": { 00:21:37.227 "adrfam": "IPv4", 00:21:37.227 "traddr": "10.0.0.1", 00:21:37.227 "trsvcid": "58810", 00:21:37.227 "trtype": "TCP" 00:21:37.227 }, 00:21:37.227 "qid": 0, 00:21:37.227 "state": "enabled" 00:21:37.227 } 00:21:37.227 ]' 00:21:37.227 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:37.227 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.227 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:37.503 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.503 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:37.503 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.503 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.503 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.768 10:02:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:02:NGEzYWQ4MjMwZjg3NzE2NzhmOGYxYTc1NzZjMGYzNjQxN2VhZWU3Nzg2N2YwMGQ1csW7fA==: 00:21:38.338 10:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.338 10:02:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:38.338 10:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:38.338 10:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.338 10:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:38.338 10:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:38.338 10:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.338 10:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.904 10:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:21:38.904 10:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:38.904 10:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.904 10:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:38.904 10:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:38.904 10:02:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key3 00:21:38.904 10:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:38.904 10:02:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.904 10:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:38.904 10:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:38.904 10:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.470 00:21:39.470 10:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:39.470 10:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:39.470 10:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.728 10:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.728 10:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.728 10:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:39.728 10:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.728 10:02:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:39.728 10:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:39.728 { 
00:21:39.728 "auth": { 00:21:39.728 "dhgroup": "ffdhe8192", 00:21:39.728 "digest": "sha512", 00:21:39.728 "state": "completed" 00:21:39.728 }, 00:21:39.728 "cntlid": 143, 00:21:39.728 "listen_address": { 00:21:39.728 "adrfam": "IPv4", 00:21:39.728 "traddr": "10.0.0.2", 00:21:39.728 "trsvcid": "4420", 00:21:39.728 "trtype": "TCP" 00:21:39.728 }, 00:21:39.728 "peer_address": { 00:21:39.728 "adrfam": "IPv4", 00:21:39.728 "traddr": "10.0.0.1", 00:21:39.728 "trsvcid": "58848", 00:21:39.728 "trtype": "TCP" 00:21:39.728 }, 00:21:39.729 "qid": 0, 00:21:39.729 "state": "enabled" 00:21:39.729 } 00:21:39.729 ]' 00:21:39.729 10:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:39.729 10:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.729 10:02:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:39.729 10:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:39.729 10:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:39.729 10:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.729 10:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.729 10:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.987 10:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:03:YmU2NTFlNWRlNDZjODE2ZTRhZTA3OTI5ZTY5NmY5Y2E3MTA0NjJmYzYxZjA3YWVlMDIwMWUwOGRiN2M2MjIyY6G8ByA=: 00:21:40.921 10:02:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.921 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:40.921 10:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:40.921 10:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.921 10:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:40.921 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:40.921 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:21:40.921 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:40.921 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:40.921 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:40.921 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:41.179 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 
-- # connect_authenticate sha512 ffdhe8192 0 00:21:41.179 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:41.179 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.179 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:41.179 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:41.179 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key0 00:21:41.179 10:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:41.179 10:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.179 10:02:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:41.179 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:41.179 10:02:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:41.745 00:21:41.745 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:41.745 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.745 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:42.003 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.003 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.003 10:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:42.003 10:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.003 10:02:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:42.003 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:42.003 { 00:21:42.003 "auth": { 00:21:42.003 "dhgroup": "ffdhe8192", 00:21:42.004 "digest": "sha512", 00:21:42.004 "state": "completed" 00:21:42.004 }, 00:21:42.004 "cntlid": 145, 00:21:42.004 "listen_address": { 00:21:42.004 "adrfam": "IPv4", 00:21:42.004 "traddr": "10.0.0.2", 00:21:42.004 "trsvcid": "4420", 00:21:42.004 "trtype": "TCP" 00:21:42.004 }, 00:21:42.004 "peer_address": { 00:21:42.004 "adrfam": "IPv4", 00:21:42.004 "traddr": "10.0.0.1", 00:21:42.004 "trsvcid": "50110", 00:21:42.004 "trtype": "TCP" 00:21:42.004 }, 00:21:42.004 "qid": 0, 00:21:42.004 "state": "enabled" 00:21:42.004 } 00:21:42.004 ]' 00:21:42.004 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:42.004 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.004 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:42.004 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:21:42.004 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:42.262 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.262 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.262 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.521 10:02:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:MDBjNGMxZTBkMzkwNzM1Y2YxNzE4MGQwMDMxMTZjNTg5NmYyZGYwNjI2NzJjMGVlsc6dzg==: 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-key key1 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:43.087 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:43.088 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:43.088 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:43.088 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:43.088 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:43.088 10:02:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:43.088 10:02:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:44.081 2024/05/15 10:02:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:44.081 request: 00:21:44.081 { 00:21:44.081 "method": "bdev_nvme_attach_controller", 00:21:44.081 "params": { 00:21:44.081 "name": "nvme0", 00:21:44.081 "trtype": "tcp", 00:21:44.081 "traddr": "10.0.0.2", 00:21:44.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4", 00:21:44.081 "adrfam": "ipv4", 00:21:44.081 "trsvcid": "4420", 00:21:44.081 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:44.081 "dhchap_key": "key2" 00:21:44.081 } 00:21:44.081 } 00:21:44.081 Got JSON-RPC error response 00:21:44.081 GoRPCClient: error on JSON-RPC call 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77705 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 77705 ']' 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 77705 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 77705 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 77705' 00:21:44.081 killing process with pid 77705 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 77705 00:21:44.081 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 77705 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@22 -- # nvmftestfini 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.674 rmmod nvme_tcp 00:21:44.674 rmmod nvme_fabrics 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 77661 ']' 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 77661 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 77661 ']' 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 77661 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 77661 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 77661' 00:21:44.674 killing process with pid 77661 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 77661 00:21:44.674 10:02:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 77661 00:21:44.931 10:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.931 10:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.931 10:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.931 10:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.931 10:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.931 10:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.931 10:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.931 10:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.188 10:02:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:45.188 10:02:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.66H /tmp/spdk.key-sha256.Ew8 /tmp/spdk.key-sha384.WGq /tmp/spdk.key-sha512.YF4 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:21:45.188 ************************************ 00:21:45.188 END TEST nvmf_auth_target 00:21:45.188 ************************************ 00:21:45.188 
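The nvmf_auth_target run that ends above drives the same DH-HMAC-CHAP sequence from target/auth.sh for every digest/dhgroup/key combination. A condensed sketch of one round of that sequence is shown below; it reuses only the RPCs, flags, NQNs, and socket paths visible in the trace, and the DHHC-1 secret is deliberately elided rather than reproduced:

#!/usr/bin/env bash
# Condensed DHCHAP round trip as exercised by target/auth.sh (sketch only;
# values copied from the trace above, secret elided).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4
subnqn=nqn.2024-03.io.spdk:cnode0

# Target side: allow the host to authenticate with a specific DHCHAP key.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1

# Host side (bdev_nvme initiator driven over the host RPC socket):
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1

# Verify the negotiated digest/dhgroup/state on the target, then tear down.
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Kernel-initiator variant of the same handshake (DHHC-1 secret elided).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 8b97099d-9860-4879-a034-2bfa904443b4 --dhchap-secret DHHC-1:00:...
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"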
00:21:45.188 real 2m55.282s 00:21:45.188 user 6m58.143s 00:21:45.188 sys 0m30.923s 00:21:45.188 10:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:45.188 10:02:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.188 10:02:22 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:45.188 10:02:22 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:45.188 10:02:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:21:45.188 10:02:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:45.188 10:02:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:45.188 ************************************ 00:21:45.188 START TEST nvmf_bdevio_no_huge 00:21:45.188 ************************************ 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:45.188 * Looking for test storage... 00:21:45.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.188 
10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:45.188 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:45.445 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:45.445 Cannot find device "nvmf_tgt_br" 00:21:45.445 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:21:45.445 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:45.445 Cannot find device "nvmf_tgt_br2" 00:21:45.445 10:02:22 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:21:45.445 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:45.445 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:45.445 Cannot find device "nvmf_tgt_br" 00:21:45.445 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:21:45.445 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:45.445 Cannot find device "nvmf_tgt_br2" 00:21:45.445 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:21:45.445 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:45.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:45.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:45.446 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:45.703 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:45.703 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:45.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:21:45.704 00:21:45.704 --- 10.0.0.2 ping statistics --- 00:21:45.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.704 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:45.704 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:45.704 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:21:45.704 00:21:45.704 --- 10.0.0.3 ping statistics --- 00:21:45.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.704 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:45.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:21:45.704 00:21:45.704 --- 10.0.0.1 ping statistics --- 00:21:45.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.704 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=82772 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 82772 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # '[' -z 82772 ']' 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:45.704 10:02:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:45.704 [2024-05-15 10:02:23.046022] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:21:45.704 [2024-05-15 10:02:23.046426] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:45.962 [2024-05-15 10:02:23.207043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.219 [2024-05-15 10:02:23.406074] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:46.219 [2024-05-15 10:02:23.406522] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.219 [2024-05-15 10:02:23.406758] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.219 [2024-05-15 10:02:23.407031] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.219 [2024-05-15 10:02:23.407201] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.219 [2024-05-15 10:02:23.408656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:46.219 [2024-05-15 10:02:23.408758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:46.219 [2024-05-15 10:02:23.408869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:46.219 [2024-05-15 10:02:23.408927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # return 0 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:46.856 [2024-05-15 10:02:24.101879] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:46.856 Malloc0 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.856 10:02:24 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:46.856 [2024-05-15 10:02:24.148686] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:46.856 [2024-05-15 10:02:24.149735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.856 { 00:21:46.856 "params": { 00:21:46.856 "name": "Nvme$subsystem", 00:21:46.856 "trtype": "$TEST_TRANSPORT", 00:21:46.856 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.856 "adrfam": "ipv4", 00:21:46.856 "trsvcid": "$NVMF_PORT", 00:21:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.856 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.856 "hdgst": ${hdgst:-false}, 00:21:46.856 "ddgst": ${ddgst:-false} 00:21:46.856 }, 00:21:46.856 "method": "bdev_nvme_attach_controller" 00:21:46.856 } 00:21:46.856 EOF 00:21:46.856 )") 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:46.856 10:02:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:46.856 "params": { 00:21:46.856 "name": "Nvme1", 00:21:46.856 "trtype": "tcp", 00:21:46.856 "traddr": "10.0.0.2", 00:21:46.856 "adrfam": "ipv4", 00:21:46.856 "trsvcid": "4420", 00:21:46.856 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.856 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:46.856 "hdgst": false, 00:21:46.856 "ddgst": false 00:21:46.856 }, 00:21:46.856 "method": "bdev_nvme_attach_controller" 00:21:46.856 }' 00:21:46.856 [2024-05-15 10:02:24.206796] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:21:46.857 [2024-05-15 10:02:24.207148] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82833 ] 00:21:47.114 [2024-05-15 10:02:24.366235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:47.373 [2024-05-15 10:02:24.527540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.373 [2024-05-15 10:02:24.527639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.373 [2024-05-15 10:02:24.527625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.373 I/O targets: 00:21:47.373 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:47.373 00:21:47.373 00:21:47.373 CUnit - A unit testing framework for C - Version 2.1-3 00:21:47.373 http://cunit.sourceforge.net/ 00:21:47.373 00:21:47.373 00:21:47.373 Suite: bdevio tests on: Nvme1n1 00:21:47.631 Test: blockdev write read block ...passed 00:21:47.631 Test: blockdev write zeroes read block ...passed 00:21:47.631 Test: blockdev write zeroes read no split ...passed 00:21:47.631 Test: blockdev write zeroes read split ...passed 00:21:47.631 Test: blockdev write zeroes read split partial ...passed 00:21:47.631 Test: blockdev reset ...[2024-05-15 10:02:24.880722] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.631 [2024-05-15 10:02:24.881055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7c360 (9): Bad file descriptor 00:21:47.631 [2024-05-15 10:02:24.893587] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:47.631 passed 00:21:47.631 Test: blockdev write read 8 blocks ...passed 00:21:47.631 Test: blockdev write read size > 128k ...passed 00:21:47.631 Test: blockdev write read invalid size ...passed 00:21:47.631 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:47.631 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:47.631 Test: blockdev write read max offset ...passed 00:21:47.889 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:47.889 Test: blockdev writev readv 8 blocks ...passed 00:21:47.889 Test: blockdev writev readv 30 x 1block ...passed 00:21:47.889 Test: blockdev writev readv block ...passed 00:21:47.889 Test: blockdev writev readv size > 128k ...passed 00:21:47.889 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:47.889 Test: blockdev comparev and writev ...[2024-05-15 10:02:25.111538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:47.889 [2024-05-15 10:02:25.111777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:47.889 [2024-05-15 10:02:25.111923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:47.889 [2024-05-15 10:02:25.112031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.889 [2024-05-15 10:02:25.112494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:47.889 [2024-05-15 10:02:25.112630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:47.889 [2024-05-15 10:02:25.112793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:47.889 [2024-05-15 10:02:25.112954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:47.889 [2024-05-15 10:02:25.113459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:47.889 [2024-05-15 10:02:25.113669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:47.889 [2024-05-15 10:02:25.113866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:47.889 [2024-05-15 10:02:25.113996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:47.889 [2024-05-15 10:02:25.114461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:47.889 [2024-05-15 10:02:25.114602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:47.889 [2024-05-15 10:02:25.114752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:47.889 [2024-05-15 10:02:25.114911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:47.889 passed 00:21:47.889 Test: blockdev nvme passthru rw ...passed 00:21:47.889 Test: blockdev nvme passthru vendor specific ...[2024-05-15 10:02:25.197683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:47.889 [2024-05-15 10:02:25.197748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:47.889 [2024-05-15 10:02:25.197873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:47.889 [2024-05-15 10:02:25.197977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:47.889 [2024-05-15 10:02:25.198144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:47.889 [2024-05-15 10:02:25.198253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:47.889 [2024-05-15 10:02:25.198530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:47.889 [2024-05-15 10:02:25.198674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:47.889 passed 00:21:47.889 Test: blockdev nvme admin passthru ...passed 00:21:47.889 Test: blockdev copy ...passed 00:21:47.889 00:21:47.889 Run Summary: Type Total Ran Passed Failed Inactive 00:21:47.889 suites 1 1 n/a 0 0 00:21:47.889 tests 23 23 23 0 0 00:21:47.889 asserts 152 152 152 0 
n/a 00:21:47.889 00:21:47.889 Elapsed time = 1.030 seconds 00:21:48.453 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:48.453 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:48.453 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:48.453 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:48.453 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:48.453 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:48.453 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:48.453 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:48.711 rmmod nvme_tcp 00:21:48.711 rmmod nvme_fabrics 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 82772 ']' 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 82772 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' -z 82772 ']' 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # kill -0 82772 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # uname 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 82772 00:21:48.711 killing process with pid 82772 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # echo 'killing process with pid 82772' 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # kill 82772 00:21:48.711 [2024-05-15 10:02:25.949106] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:48.711 10:02:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # wait 82772 00:21:49.282 10:02:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:49.282 10:02:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:49.282 10:02:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:49.282 10:02:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
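For reference, the nvmf_bdevio_no_huge stage being torn down here drove I/O through a bdev_nvme controller attached to the 10.0.0.2:4420 listener over plain TCP, with the bdevio app running on 1024 MB of non-hugepage memory. A condensed standalone sketch of that invocation, reconstructed from the config heredoc printed earlier in the trace; the surrounding "subsystems"/"bdev" JSON wrapper and the config file name are assumptions, since the trace only prints the bdev_nvme_attach_controller entry itself:

cat > /tmp/bdevio_nvme.json <<'EOF'      # illustrative path; the test streams this config via /dev/fd/62
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# --no-huge -s 1024: run bdevio with 1024 MB of regular (non-hugepage) memory, as in the run above
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024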
00:21:49.282 10:02:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:49.282 10:02:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.282 10:02:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.282 10:02:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.282 10:02:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:49.282 ************************************ 00:21:49.282 END TEST nvmf_bdevio_no_huge 00:21:49.282 ************************************ 00:21:49.282 00:21:49.282 real 0m4.167s 00:21:49.282 user 0m14.440s 00:21:49.282 sys 0m1.795s 00:21:49.282 10:02:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:49.282 10:02:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.282 10:02:26 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:49.282 10:02:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:49.282 10:02:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:49.282 10:02:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:49.282 ************************************ 00:21:49.282 START TEST nvmf_tls 00:21:49.282 ************************************ 00:21:49.282 10:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:49.542 * Looking for test storage... 00:21:49.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.542 10:02:26 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:49.542 Cannot find device "nvmf_tgt_br" 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:49.542 Cannot find device "nvmf_tgt_br2" 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:49.542 Cannot find device "nvmf_tgt_br" 
00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:49.542 Cannot find device "nvmf_tgt_br2" 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:49.542 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:49.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.543 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:21:49.543 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:49.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.543 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:21:49.543 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:49.801 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:49.801 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:49.801 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:49.801 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:49.801 10:02:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:49.801 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:49.801 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:49.801 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:49.801 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:49.801 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:49.801 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:49.801 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:49.801 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:49.801 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:49.802 10:02:27 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:49.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:21:49.802 00:21:49.802 --- 10.0.0.2 ping statistics --- 00:21:49.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.802 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:49.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:49.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:21:49.802 00:21:49.802 --- 10.0.0.3 ping statistics --- 00:21:49.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.802 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:49.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:21:49.802 00:21:49.802 --- 10.0.0.1 ping statistics --- 00:21:49.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.802 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.802 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83026 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83026 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 83026 ']' 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
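The nvmf_veth_init trace above builds the network that the rest of this run depends on: a nvmf_tgt_ns_spdk namespace holding the target ends of the veth pairs, a nvmf_br bridge joining them to the initiator-side nvmf_init_if at 10.0.0.1, and iptables rules admitting NVMe/TCP traffic on port 4420. A condensed restatement of those commands, with the second target interface (nvmf_tgt_if2, 10.0.0.3) omitted for brevity:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator half stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target half is moved into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the two veth halves together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                             # initiator to target, as verified above
# the target is then launched inside the namespace, exactly as the trace shows:
# ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc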
00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:50.059 10:02:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.060 [2024-05-15 10:02:27.280393] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:21:50.060 [2024-05-15 10:02:27.280813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.060 [2024-05-15 10:02:27.430011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.318 [2024-05-15 10:02:27.589511] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.318 [2024-05-15 10:02:27.589789] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.318 [2024-05-15 10:02:27.589897] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.318 [2024-05-15 10:02:27.589985] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.318 [2024-05-15 10:02:27.590017] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.318 [2024-05-15 10:02:27.590114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.884 10:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:50.884 10:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:21:50.884 10:02:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.884 10:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:50.884 10:02:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.143 10:02:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.143 10:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:51.143 10:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:51.401 true 00:21:51.401 10:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:51.401 10:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:51.659 10:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:51.659 10:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:51.659 10:02:28 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:51.917 10:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:51.917 10:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:52.174 10:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:52.174 10:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:52.174 10:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:52.432 10:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:52.432 10:02:29 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # 
jq -r .tls_version 00:21:52.690 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:52.690 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:52.690 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:52.690 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:52.947 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:52.947 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:52.947 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:53.213 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:53.213 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:53.779 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:53.779 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:53.779 10:02:30 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:54.038 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:54.038 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.aZUL0oRbjJ 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:54.297 
10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.BkXrnvaIy9 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.aZUL0oRbjJ 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.BkXrnvaIy9 00:21:54.297 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:54.556 10:02:31 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:55.123 10:02:32 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.aZUL0oRbjJ 00:21:55.123 10:02:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.aZUL0oRbjJ 00:21:55.123 10:02:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:55.123 [2024-05-15 10:02:32.497588] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.380 10:02:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:55.638 10:02:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:55.897 [2024-05-15 10:02:33.145690] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:55.897 [2024-05-15 10:02:33.146105] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:55.897 [2024-05-15 10:02:33.146477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.897 10:02:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:56.170 malloc0 00:21:56.170 10:02:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:56.455 10:02:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aZUL0oRbjJ 00:21:56.713 [2024-05-15 10:02:34.046568] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:56.713 10:02:34 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aZUL0oRbjJ 00:22:08.911 Initializing NVMe Controllers 00:22:08.911 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:08.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:08.911 Initialization complete. Launching workers. 
00:22:08.911 ======================================================== 00:22:08.911 Latency(us) 00:22:08.911 Device Information : IOPS MiB/s Average min max 00:22:08.911 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11476.39 44.83 5577.23 1532.70 10073.78 00:22:08.911 ======================================================== 00:22:08.911 Total : 11476.39 44.83 5577.23 1532.70 10073.78 00:22:08.911 00:22:08.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aZUL0oRbjJ 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aZUL0oRbjJ' 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83387 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83387 /var/tmp/bdevperf.sock 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 83387 ']' 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.911 [2024-05-15 10:02:44.331780] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:22:08.911 [2024-05-15 10:02:44.332175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83387 ] 00:22:08.911 [2024-05-15 10:02:44.470433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.911 [2024-05-15 10:02:44.633478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:08.911 10:02:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aZUL0oRbjJ 00:22:08.911 [2024-05-15 10:02:45.037072] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:08.911 [2024-05-15 10:02:45.037760] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:08.911 TLSTESTn1 00:22:08.911 10:02:45 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:08.911 Running I/O for 10 seconds... 00:22:18.880 00:22:18.880 Latency(us) 00:22:18.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.881 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:18.881 Verification LBA range: start 0x0 length 0x2000 00:22:18.881 TLSTESTn1 : 10.02 4382.55 17.12 0.00 0.00 29152.05 5867.03 23343.30 00:22:18.881 =================================================================================================================== 00:22:18.881 Total : 4382.55 17.12 0.00 0.00 29152.05 5867.03 23343.30 00:22:18.881 0 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83387 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 83387 ']' 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 83387 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83387 00:22:18.881 killing process with pid 83387 00:22:18.881 Received shutdown signal, test time was about 10.000000 seconds 00:22:18.881 00:22:18.881 Latency(us) 00:22:18.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.881 =================================================================================================================== 00:22:18.881 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83387' 00:22:18.881 
10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 83387 00:22:18.881 [2024-05-15 10:02:55.334733] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 83387 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BkXrnvaIy9 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BkXrnvaIy9 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:18.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BkXrnvaIy9 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BkXrnvaIy9' 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83524 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83524 /var/tmp/bdevperf.sock 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 83524 ']' 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:18.881 10:02:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.881 [2024-05-15 10:02:55.780590] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:22:18.881 [2024-05-15 10:02:55.781565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83524 ] 00:22:18.881 [2024-05-15 10:02:55.923151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.881 [2024-05-15 10:02:56.135222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.814 10:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:19.814 10:02:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:19.814 10:02:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BkXrnvaIy9 00:22:19.814 [2024-05-15 10:02:57.189619] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:19.814 [2024-05-15 10:02:57.190575] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:19.814 [2024-05-15 10:02:57.196558] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spd[2024-05-15 10:02:57.196687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b0a40 (107): Transport endpoint is not connected 00:22:19.814 k_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:20.073 [2024-05-15 10:02:57.197675] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b0a40 (9): Bad file descriptor 00:22:20.073 [2024-05-15 10:02:57.198667] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:20.073 [2024-05-15 10:02:57.198945] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:20.073 [2024-05-15 10:02:57.199185] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
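This failure is the expected outcome of the negative case: bdevperf presented the second key (/tmp/tmp.BkXrnvaIy9) while the target was configured with the first one, so the connection is dropped before the fabrics CONNECT completes, presumably because the TLS handshake cannot agree on a PSK. Both keys were generated earlier by format_interchange_psk (target/tls.sh@118 and @119). The helper's body is not shown in this trace, so the sketch below is only a guess at what it computes, inferred from the printed NVMeTLSkey-1:01:...: values; the CRC32 suffix and its little-endian byte order are assumptions.

# Hypothetical reconstruction of format_interchange_psk, not the actual nvmf/common.sh helper.
format_interchange_psk() {
  local key=$1 hash=$2   # e.g. 00112233445566778899aabbccddeeff and 1, as used above
  python3 - "$key" "$hash" <<'PY'
import base64, sys, zlib
key, hash_id = sys.argv[1].encode(), int(sys.argv[2])
# Assumption: interchange key = base64(key bytes + 4-byte CRC32), which matches the printed key length
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(key + crc).decode()), end="")
PY
}

key_path=$(mktemp)   # this run produced /tmp/tmp.aZUL0oRbjJ and /tmp/tmp.BkXrnvaIy9 the same way
format_interchange_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
chmod 0600 "$key_path"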
00:22:20.073 2024/05/15 10:02:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.BkXrnvaIy9 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:20.073 request: 00:22:20.073 { 00:22:20.073 "method": "bdev_nvme_attach_controller", 00:22:20.073 "params": { 00:22:20.073 "name": "TLSTEST", 00:22:20.073 "trtype": "tcp", 00:22:20.073 "traddr": "10.0.0.2", 00:22:20.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.073 "adrfam": "ipv4", 00:22:20.073 "trsvcid": "4420", 00:22:20.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.073 "psk": "/tmp/tmp.BkXrnvaIy9" 00:22:20.073 } 00:22:20.073 } 00:22:20.073 Got JSON-RPC error response 00:22:20.073 GoRPCClient: error on JSON-RPC call 00:22:20.073 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83524 00:22:20.073 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 83524 ']' 00:22:20.073 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 83524 00:22:20.073 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:20.073 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:20.073 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83524 00:22:20.073 killing process with pid 83524 00:22:20.073 Received shutdown signal, test time was about 10.000000 seconds 00:22:20.073 00:22:20.073 Latency(us) 00:22:20.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.073 =================================================================================================================== 00:22:20.073 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:20.073 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:20.073 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:20.073 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83524' 00:22:20.073 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 83524 00:22:20.073 [2024-05-15 10:02:57.277035] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:20.073 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 83524 00:22:20.331 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:20.331 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:20.331 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:20.331 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:20.331 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:20.331 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aZUL0oRbjJ 00:22:20.331 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:20.331 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aZUL0oRbjJ 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local 
arg=run_bdevperf 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aZUL0oRbjJ 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aZUL0oRbjJ' 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83570 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83570 /var/tmp/bdevperf.sock 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 83570 ']' 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:20.332 10:02:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.590 [2024-05-15 10:02:57.729301] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:22:20.590 [2024-05-15 10:02:57.729722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83570 ] 00:22:20.590 [2024-05-15 10:02:57.870942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.848 [2024-05-15 10:02:58.056038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.803 10:02:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:21.803 10:02:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:21.803 10:02:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.aZUL0oRbjJ 00:22:21.803 [2024-05-15 10:02:59.142335] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.803 [2024-05-15 10:02:59.143311] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:21.803 [2024-05-15 10:02:59.148814] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:21.803 [2024-05-15 10:02:59.149124] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:21.803 [2024-05-15 10:02:59.149304] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:21.803 [2024-05-15 10:02:59.149785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x851a40 (107): Transport endpoint is not connected 00:22:21.803 [2024-05-15 10:02:59.150754] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x851a40 (9): Bad file descriptor 00:22:21.803 [2024-05-15 10:02:59.151747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:21.803 [2024-05-15 10:02:59.151973] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:21.803 [2024-05-15 10:02:59.152193] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
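This second negative case fails for a different reason than the first: the key itself is the valid one, but the connection is made as nqn.2016-06.io.spdk:host2, and the target resolves PSKs by the identity string shown in the error ("NVMe0R01 <hostnqn> <subnqn>"), for which only host1 was registered during setup. A short sketch of the registration step involved, reusing the RPC shape from the trace; the host2 line is hypothetical and is precisely what this case deliberately leaves out:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# what setup_nvmf_tgt actually ran for host1 (see the trace before the perf run):
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aZUL0oRbjJ
# hypothetical: host2 would need its own entry before the identity lookup above could succeed
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.aZUL0oRbjJ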
00:22:21.803 2024/05/15 10:02:59 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.aZUL0oRbjJ subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:21.803 request: 00:22:21.803 { 00:22:21.803 "method": "bdev_nvme_attach_controller", 00:22:21.803 "params": { 00:22:21.803 "name": "TLSTEST", 00:22:21.803 "trtype": "tcp", 00:22:21.803 "traddr": "10.0.0.2", 00:22:21.803 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:21.803 "adrfam": "ipv4", 00:22:21.803 "trsvcid": "4420", 00:22:21.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.803 "psk": "/tmp/tmp.aZUL0oRbjJ" 00:22:21.803 } 00:22:21.803 } 00:22:21.803 Got JSON-RPC error response 00:22:21.803 GoRPCClient: error on JSON-RPC call 00:22:22.061 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83570 00:22:22.061 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 83570 ']' 00:22:22.061 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 83570 00:22:22.061 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:22.061 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:22.061 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83570 00:22:22.061 killing process with pid 83570 00:22:22.061 Received shutdown signal, test time was about 10.000000 seconds 00:22:22.061 00:22:22.061 Latency(us) 00:22:22.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:22.061 =================================================================================================================== 00:22:22.061 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:22.061 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:22.061 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:22.061 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83570' 00:22:22.061 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 83570 00:22:22.061 [2024-05-15 10:02:59.217469] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:22.062 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 83570 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aZUL0oRbjJ 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aZUL0oRbjJ 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local 
arg=run_bdevperf 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aZUL0oRbjJ 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aZUL0oRbjJ' 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83621 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83621 /var/tmp/bdevperf.sock 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 83621 ']' 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:22.321 10:02:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.321 [2024-05-15 10:02:59.688472] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:22:22.321 [2024-05-15 10:02:59.688883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83621 ] 00:22:22.580 [2024-05-15 10:02:59.838213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.838 [2024-05-15 10:03:00.003815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.403 10:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:23.403 10:03:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:23.403 10:03:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aZUL0oRbjJ 00:22:23.662 [2024-05-15 10:03:00.981382] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:23.662 [2024-05-15 10:03:00.982362] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:23.662 [2024-05-15 10:03:00.988316] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:23.662 [2024-05-15 10:03:00.988627] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:23.662 [2024-05-15 10:03:00.988823] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:23.662 [2024-05-15 10:03:00.989284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431a40 (107): Transport endpoint is not connected 00:22:23.662 [2024-05-15 10:03:00.990254] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2431a40 (9): Bad file descriptor 00:22:23.662 [2024-05-15 10:03:00.991250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:23.662 [2024-05-15 10:03:00.991498] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:23.662 [2024-05-15 10:03:00.991686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
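Same negative case as above with the roles swapped: host1 attaching to cnode2, again with no matching PSK registered on the target, so the lookup for identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" fails and the request/response dump that follows reports the same -32602 error. Outside a deliberately failing test, a quick way to see which hosts a subsystem actually allows (and therefore which identities can resolve) is to list the subsystems over the target's RPC socket; a minimal check, assuming the default socket this target uses:

    # Hedged sketch: list subsystems and their allowed hosts on the running target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems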
00:22:23.662 2024/05/15 10:03:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.aZUL0oRbjJ subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:23.662 request: 00:22:23.662 { 00:22:23.662 "method": "bdev_nvme_attach_controller", 00:22:23.662 "params": { 00:22:23.662 "name": "TLSTEST", 00:22:23.662 "trtype": "tcp", 00:22:23.662 "traddr": "10.0.0.2", 00:22:23.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.662 "adrfam": "ipv4", 00:22:23.662 "trsvcid": "4420", 00:22:23.662 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:23.662 "psk": "/tmp/tmp.aZUL0oRbjJ" 00:22:23.662 } 00:22:23.662 } 00:22:23.662 Got JSON-RPC error response 00:22:23.662 GoRPCClient: error on JSON-RPC call 00:22:23.662 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83621 00:22:23.662 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 83621 ']' 00:22:23.662 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 83621 00:22:23.662 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:23.662 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:23.662 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83621 00:22:23.920 killing process with pid 83621 00:22:23.920 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.920 00:22:23.920 Latency(us) 00:22:23.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.920 =================================================================================================================== 00:22:23.920 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:23.920 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:23.920 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:23.920 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83621' 00:22:23.920 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 83621 00:22:23.920 [2024-05-15 10:03:01.058836] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:23.920 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 83621 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:24.178 10:03:01 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83667 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83667 /var/tmp/bdevperf.sock 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 83667 ']' 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:24.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:24.178 10:03:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.178 [2024-05-15 10:03:01.506115] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:22:24.178 [2024-05-15 10:03:01.507058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83667 ] 00:22:24.436 [2024-05-15 10:03:01.649270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.694 [2024-05-15 10:03:01.825525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.260 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:25.260 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:25.260 10:03:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:25.518 [2024-05-15 10:03:02.846799] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:25.518 [2024-05-15 10:03:02.848624] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xedfa00 (9): Bad file descriptor 00:22:25.518 [2024-05-15 10:03:02.849612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:25.518 [2024-05-15 10:03:02.849931] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:25.518 [2024-05-15 10:03:02.850164] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:25.518 2024/05/15 10:03:02 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:22:25.518 request: 00:22:25.518 { 00:22:25.518 "method": "bdev_nvme_attach_controller", 00:22:25.518 "params": { 00:22:25.518 "name": "TLSTEST", 00:22:25.518 "trtype": "tcp", 00:22:25.518 "traddr": "10.0.0.2", 00:22:25.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:25.518 "adrfam": "ipv4", 00:22:25.518 "trsvcid": "4420", 00:22:25.518 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:22:25.518 } 00:22:25.518 } 00:22:25.518 Got JSON-RPC error response 00:22:25.518 GoRPCClient: error on JSON-RPC call 00:22:25.518 10:03:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83667 00:22:25.518 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 83667 ']' 00:22:25.518 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 83667 00:22:25.518 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:25.518 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:25.518 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83667 00:22:25.776 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:25.776 killing process with pid 83667 00:22:25.776 Received shutdown signal, test time was about 10.000000 seconds 00:22:25.776 00:22:25.776 Latency(us) 00:22:25.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.776 
=================================================================================================================== 00:22:25.776 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:25.776 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:25.776 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83667' 00:22:25.776 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 83667 00:22:25.776 10:03:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 83667 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83026 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 83026 ']' 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 83026 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83026 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83026' 00:22:26.033 killing process with pid 83026 00:22:26.033 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 83026 00:22:26.033 [2024-05-15 10:03:03.329570] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 83026 00:22:26.033 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:26.033 [2024-05-15 10:03:03.329851] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:26.597 10:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:26.597 10:03:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.IQwxcjDtbv 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.IQwxcjDtbv 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83728 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83728 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 83728 ']' 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:26.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:26.598 10:03:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.598 [2024-05-15 10:03:03.870801] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:22:26.598 [2024-05-15 10:03:03.871284] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.855 [2024-05-15 10:03:04.013659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.855 [2024-05-15 10:03:04.197067] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.855 [2024-05-15 10:03:04.197447] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.855 [2024-05-15 10:03:04.197624] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.855 [2024-05-15 10:03:04.197679] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.855 [2024-05-15 10:03:04.197709] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
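The lines above produce and store the long-format key used for the rest of the section: format_interchange_psk emits the TLS PSK in interchange format (the NVMeTLSkey-1 prefix, a hash-indicator field of 02, and a base64 payload that appears to carry the configured key material plus a CRC-32 check), and the key is written byte-for-byte into a mktemp file that both the initiator and the target will insist is private to its owner. Consolidated, using the same key and the path this run happened to get:

    # Hedged recap of the key-file preparation shown above.
    key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
    key_long_path=$(mktemp)                  # this run got /tmp/tmp.IQwxcjDtbv
    echo -n "$key_long" > "$key_long_path"   # no trailing newline
    chmod 0600 "$key_long_path"              # later steps fail if this is loosened

The repeated deprecation warnings in this log ("PSK path" and "spdk_nvme_ctrlr_opts.psk", both scheduled for removal in v24.09) indicate that this file-path-based flow is transitional and is being replaced by keyring-registered keys in later SPDK releases.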
00:22:26.855 [2024-05-15 10:03:04.197829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.443 10:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:27.443 10:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:27.443 10:03:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:27.443 10:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:27.443 10:03:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.701 10:03:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.701 10:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.IQwxcjDtbv 00:22:27.701 10:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IQwxcjDtbv 00:22:27.701 10:03:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:27.959 [2024-05-15 10:03:05.099718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.959 10:03:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:28.217 10:03:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:28.217 [2024-05-15 10:03:05.599774] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:28.217 [2024-05-15 10:03:05.600268] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.217 [2024-05-15 10:03:05.600659] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.475 10:03:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:28.734 malloc0 00:22:28.734 10:03:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:28.992 10:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IQwxcjDtbv 00:22:29.250 [2024-05-15 10:03:06.617242] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IQwxcjDtbv 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IQwxcjDtbv' 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83836 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83836 /var/tmp/bdevperf.sock 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 83836 ']' 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:29.508 10:03:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.508 [2024-05-15 10:03:06.715591] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:22:29.508 [2024-05-15 10:03:06.716029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83836 ] 00:22:29.508 [2024-05-15 10:03:06.862260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.765 [2024-05-15 10:03:07.043359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.331 10:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:30.331 10:03:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:30.331 10:03:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IQwxcjDtbv 00:22:30.898 [2024-05-15 10:03:08.003172] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:30.898 [2024-05-15 10:03:08.004104] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:30.898 TLSTESTn1 00:22:30.898 10:03:08 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:30.898 Running I/O for 10 seconds... 
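The 10-second run reported below is driven entirely over RPC: bdevperf is started with -z so it waits for configuration, the TLS controller is attached through its private socket, and bdevperf.py triggers the already-configured verify workload with perform_tests (its -t 20 appears to be a timeout on that RPC call, separate from the workload's own -t 10). Reduced to the commands visible in this log, with the socket, address and key path from this run:

    # Hedged sketch of the RPC-driven bdevperf flow exercised above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # (the harness waits for the RPC socket to appear before issuing commands)

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IQwxcjDtbv

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests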
00:22:43.098 00:22:43.098 Latency(us) 00:22:43.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.098 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:43.098 Verification LBA range: start 0x0 length 0x2000 00:22:43.098 TLSTESTn1 : 10.01 4887.35 19.09 0.00 0.00 26142.35 5180.46 20721.86 00:22:43.098 =================================================================================================================== 00:22:43.098 Total : 4887.35 19.09 0.00 0.00 26142.35 5180.46 20721.86 00:22:43.098 0 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83836 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 83836 ']' 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 83836 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83836 00:22:43.098 killing process with pid 83836 00:22:43.098 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.098 00:22:43.098 Latency(us) 00:22:43.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.098 =================================================================================================================== 00:22:43.098 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83836' 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 83836 00:22:43.098 [2024-05-15 10:03:18.344457] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 83836 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.IQwxcjDtbv 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IQwxcjDtbv 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IQwxcjDtbv 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IQwxcjDtbv 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:43.098 
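That passing iteration (≈4887 IOPS of 4 KiB verify I/O over the TLS connection for just over 10 seconds) depends on the target-side setup performed a few lines earlier with the 0600 key. Gathered in one place, with the NQNs, address and key path used in this run, the sequence is roughly:

    # Hedged recap of the target-side TLS configuration from setup_nvmf_tgt.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k asks for a TLS-required ("secure channel") listener
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # register host1 with the 0600 interchange-format key prepared earlier
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IQwxcjDtbv

The next case, already begun in the xtrace above, reuses the same key file after relaxing it to 0666 and expects the attach to fail.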
10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IQwxcjDtbv' 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83989 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83989 /var/tmp/bdevperf.sock 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 83989 ']' 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:43.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:43.098 10:03:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.098 [2024-05-15 10:03:18.796775] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:22:43.098 [2024-05-15 10:03:18.797437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83989 ] 00:22:43.098 [2024-05-15 10:03:18.946920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.098 [2024-05-15 10:03:19.122239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.098 10:03:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:43.098 10:03:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:43.098 10:03:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IQwxcjDtbv 00:22:43.098 [2024-05-15 10:03:20.122126] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.098 [2024-05-15 10:03:20.123032] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:43.098 [2024-05-15 10:03:20.123297] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.IQwxcjDtbv 00:22:43.099 2024/05/15 10:03:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.IQwxcjDtbv subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:22:43.099 request: 00:22:43.099 { 00:22:43.099 "method": 
"bdev_nvme_attach_controller", 00:22:43.099 "params": { 00:22:43.099 "name": "TLSTEST", 00:22:43.099 "trtype": "tcp", 00:22:43.099 "traddr": "10.0.0.2", 00:22:43.099 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.099 "adrfam": "ipv4", 00:22:43.099 "trsvcid": "4420", 00:22:43.099 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.099 "psk": "/tmp/tmp.IQwxcjDtbv" 00:22:43.099 } 00:22:43.099 } 00:22:43.099 Got JSON-RPC error response 00:22:43.099 GoRPCClient: error on JSON-RPC call 00:22:43.099 10:03:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83989 00:22:43.099 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 83989 ']' 00:22:43.099 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 83989 00:22:43.099 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:43.099 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:43.099 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83989 00:22:43.099 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:43.099 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:43.099 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83989' 00:22:43.099 killing process with pid 83989 00:22:43.099 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 83989 00:22:43.099 Received shutdown signal, test time was about 10.000000 seconds 00:22:43.099 00:22:43.099 Latency(us) 00:22:43.099 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.099 =================================================================================================================== 00:22:43.099 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:43.099 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 83989 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 83728 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 83728 ']' 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 83728 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 83728 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 83728' 00:22:43.360 killing process with pid 83728 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 83728 00:22:43.360 [2024-05-15 10:03:20.573108] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:43.360 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 83728 00:22:43.360 [2024-05-15 10:03:20.573356] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84045 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84045 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 84045 ']' 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.619 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:43.620 10:03:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.878 [2024-05-15 10:03:21.029051] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:22:43.878 [2024-05-15 10:03:21.029419] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.878 [2024-05-15 10:03:21.174464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.136 [2024-05-15 10:03:21.331528] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.136 [2024-05-15 10:03:21.331804] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.136 [2024-05-15 10:03:21.331918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.136 [2024-05-15 10:03:21.331974] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.136 [2024-05-15 10:03:21.332057] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
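The failure above is the intended outcome of the earlier chmod 0666: with the key file readable by group or others, the initiator-side bdev_nvme_load_psk refuses it ("Incorrect permissions for PSK file", "Could not load PSK from /tmp/tmp.IQwxcjDtbv"), and the freshly started target (pid 84045) will reject the very same file in the nvmf_subsystem_add_host call that follows, until the script restores 0600. When this shows up outside a negative test, checking and tightening the key file is usually all that is needed; a small sketch, assuming GNU stat:

    # Hedged sketch: confirm the PSK file is private to its owner before use.
    stat -c '%a %U %n' /tmp/tmp.IQwxcjDtbv   # expect 600 and the expected owner
    chmod 0600 /tmp/tmp.IQwxcjDtbv           # what target/tls.sh does before its passing runs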
00:22:44.136 [2024-05-15 10:03:21.332145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.IQwxcjDtbv 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.IQwxcjDtbv 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.IQwxcjDtbv 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IQwxcjDtbv 00:22:44.704 10:03:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:44.963 [2024-05-15 10:03:22.292884] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.963 10:03:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:45.223 10:03:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:45.482 [2024-05-15 10:03:22.852973] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:45.482 [2024-05-15 10:03:22.853407] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:45.482 [2024-05-15 10:03:22.853759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.741 10:03:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:46.000 malloc0 00:22:46.000 10:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:46.264 10:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IQwxcjDtbv 00:22:46.535 [2024-05-15 10:03:23.685686] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:46.535 [2024-05-15 10:03:23.686013] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 
00:22:46.535 [2024-05-15 10:03:23.686182] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:46.535 2024/05/15 10:03:23 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.IQwxcjDtbv], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:22:46.535 request: 00:22:46.535 { 00:22:46.535 "method": "nvmf_subsystem_add_host", 00:22:46.535 "params": { 00:22:46.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.535 "host": "nqn.2016-06.io.spdk:host1", 00:22:46.535 "psk": "/tmp/tmp.IQwxcjDtbv" 00:22:46.535 } 00:22:46.535 } 00:22:46.535 Got JSON-RPC error response 00:22:46.535 GoRPCClient: error on JSON-RPC call 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84045 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 84045 ']' 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 84045 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84045 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84045' 00:22:46.535 killing process with pid 84045 00:22:46.535 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 84045 00:22:46.535 [2024-05-15 10:03:23.745412] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 10:03:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 84045 00:22:46.535 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:46.793 10:03:24 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.IQwxcjDtbv 00:22:46.793 10:03:24 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:46.793 10:03:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:46.793 10:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:46.793 10:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:46.793 10:03:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84161 00:22:46.793 10:03:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84161 00:22:46.794 10:03:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:46.794 10:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 84161 ']' 00:22:46.794 10:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.794 10:03:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@833 -- # local max_retries=100 00:22:46.794 10:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.794 10:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:46.794 10:03:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.051 [2024-05-15 10:03:24.241919] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:22:47.051 [2024-05-15 10:03:24.243279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.051 [2024-05-15 10:03:24.393463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.310 [2024-05-15 10:03:24.566866] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.310 [2024-05-15 10:03:24.567278] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.310 [2024-05-15 10:03:24.567449] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.310 [2024-05-15 10:03:24.567523] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.310 [2024-05-15 10:03:24.567606] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.310 [2024-05-15 10:03:24.567695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.246 10:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:48.246 10:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:48.246 10:03:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:48.246 10:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:48.246 10:03:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.246 10:03:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.246 10:03:25 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.IQwxcjDtbv 00:22:48.246 10:03:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IQwxcjDtbv 00:22:48.246 10:03:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:48.246 [2024-05-15 10:03:25.558635] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.246 10:03:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:48.505 10:03:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:48.791 [2024-05-15 10:03:26.078683] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:48.792 [2024-05-15 10:03:26.079072] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:48.792 
[2024-05-15 10:03:26.079404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.792 10:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:49.049 malloc0 00:22:49.049 10:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:49.308 10:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IQwxcjDtbv 00:22:49.568 [2024-05-15 10:03:26.782824] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:49.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:49.568 10:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84265 00:22:49.568 10:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:49.569 10:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.569 10:03:26 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84265 /var/tmp/bdevperf.sock 00:22:49.569 10:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 84265 ']' 00:22:49.569 10:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.569 10:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:49.569 10:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.569 10:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:49.569 10:03:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.569 [2024-05-15 10:03:26.866599] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:22:49.569 [2024-05-15 10:03:26.867018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84265 ] 00:22:49.827 [2024-05-15 10:03:27.009954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.827 [2024-05-15 10:03:27.169891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:50.763 10:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:50.763 10:03:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:50.763 10:03:27 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IQwxcjDtbv 00:22:51.051 [2024-05-15 10:03:28.149543] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.051 [2024-05-15 10:03:28.150592] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:51.051 TLSTESTn1 00:22:51.051 10:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:51.310 10:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:51.310 "subsystems": [ 00:22:51.310 { 00:22:51.310 "subsystem": "keyring", 00:22:51.310 "config": [] 00:22:51.310 }, 00:22:51.310 { 00:22:51.310 "subsystem": "iobuf", 00:22:51.310 "config": [ 00:22:51.310 { 00:22:51.310 "method": "iobuf_set_options", 00:22:51.310 "params": { 00:22:51.310 "large_bufsize": 135168, 00:22:51.310 "large_pool_count": 1024, 00:22:51.310 "small_bufsize": 8192, 00:22:51.310 "small_pool_count": 8192 00:22:51.310 } 00:22:51.310 } 00:22:51.310 ] 00:22:51.310 }, 00:22:51.310 { 00:22:51.310 "subsystem": "sock", 00:22:51.310 "config": [ 00:22:51.310 { 00:22:51.310 "method": "sock_impl_set_options", 00:22:51.310 "params": { 00:22:51.310 "enable_ktls": false, 00:22:51.310 "enable_placement_id": 0, 00:22:51.310 "enable_quickack": false, 00:22:51.310 "enable_recv_pipe": true, 00:22:51.310 "enable_zerocopy_send_client": false, 00:22:51.310 "enable_zerocopy_send_server": true, 00:22:51.310 "impl_name": "posix", 00:22:51.310 "recv_buf_size": 2097152, 00:22:51.310 "send_buf_size": 2097152, 00:22:51.310 "tls_version": 0, 00:22:51.310 "zerocopy_threshold": 0 00:22:51.310 } 00:22:51.310 }, 00:22:51.310 { 00:22:51.310 "method": "sock_impl_set_options", 00:22:51.310 "params": { 00:22:51.310 "enable_ktls": false, 00:22:51.310 "enable_placement_id": 0, 00:22:51.310 "enable_quickack": false, 00:22:51.310 "enable_recv_pipe": true, 00:22:51.310 "enable_zerocopy_send_client": false, 00:22:51.310 "enable_zerocopy_send_server": true, 00:22:51.310 "impl_name": "ssl", 00:22:51.310 "recv_buf_size": 4096, 00:22:51.310 "send_buf_size": 4096, 00:22:51.310 "tls_version": 0, 00:22:51.310 "zerocopy_threshold": 0 00:22:51.310 } 00:22:51.310 } 00:22:51.310 ] 00:22:51.310 }, 00:22:51.310 { 00:22:51.310 "subsystem": "vmd", 00:22:51.310 "config": [] 00:22:51.310 }, 00:22:51.310 { 00:22:51.310 "subsystem": "accel", 00:22:51.310 "config": [ 00:22:51.310 { 00:22:51.310 "method": "accel_set_options", 00:22:51.310 "params": { 00:22:51.310 "buf_count": 2048, 00:22:51.311 "large_cache_size": 16, 00:22:51.311 
"sequence_count": 2048, 00:22:51.311 "small_cache_size": 128, 00:22:51.311 "task_count": 2048 00:22:51.311 } 00:22:51.311 } 00:22:51.311 ] 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "subsystem": "bdev", 00:22:51.311 "config": [ 00:22:51.311 { 00:22:51.311 "method": "bdev_set_options", 00:22:51.311 "params": { 00:22:51.311 "bdev_auto_examine": true, 00:22:51.311 "bdev_io_cache_size": 256, 00:22:51.311 "bdev_io_pool_size": 65535, 00:22:51.311 "iobuf_large_cache_size": 16, 00:22:51.311 "iobuf_small_cache_size": 128 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "bdev_raid_set_options", 00:22:51.311 "params": { 00:22:51.311 "process_window_size_kb": 1024 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "bdev_iscsi_set_options", 00:22:51.311 "params": { 00:22:51.311 "timeout_sec": 30 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "bdev_nvme_set_options", 00:22:51.311 "params": { 00:22:51.311 "action_on_timeout": "none", 00:22:51.311 "allow_accel_sequence": false, 00:22:51.311 "arbitration_burst": 0, 00:22:51.311 "bdev_retry_count": 3, 00:22:51.311 "ctrlr_loss_timeout_sec": 0, 00:22:51.311 "delay_cmd_submit": true, 00:22:51.311 "dhchap_dhgroups": [ 00:22:51.311 "null", 00:22:51.311 "ffdhe2048", 00:22:51.311 "ffdhe3072", 00:22:51.311 "ffdhe4096", 00:22:51.311 "ffdhe6144", 00:22:51.311 "ffdhe8192" 00:22:51.311 ], 00:22:51.311 "dhchap_digests": [ 00:22:51.311 "sha256", 00:22:51.311 "sha384", 00:22:51.311 "sha512" 00:22:51.311 ], 00:22:51.311 "disable_auto_failback": false, 00:22:51.311 "fast_io_fail_timeout_sec": 0, 00:22:51.311 "generate_uuids": false, 00:22:51.311 "high_priority_weight": 0, 00:22:51.311 "io_path_stat": false, 00:22:51.311 "io_queue_requests": 0, 00:22:51.311 "keep_alive_timeout_ms": 10000, 00:22:51.311 "low_priority_weight": 0, 00:22:51.311 "medium_priority_weight": 0, 00:22:51.311 "nvme_adminq_poll_period_us": 10000, 00:22:51.311 "nvme_error_stat": false, 00:22:51.311 "nvme_ioq_poll_period_us": 0, 00:22:51.311 "rdma_cm_event_timeout_ms": 0, 00:22:51.311 "rdma_max_cq_size": 0, 00:22:51.311 "rdma_srq_size": 0, 00:22:51.311 "reconnect_delay_sec": 0, 00:22:51.311 "timeout_admin_us": 0, 00:22:51.311 "timeout_us": 0, 00:22:51.311 "transport_ack_timeout": 0, 00:22:51.311 "transport_retry_count": 4, 00:22:51.311 "transport_tos": 0 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "bdev_nvme_set_hotplug", 00:22:51.311 "params": { 00:22:51.311 "enable": false, 00:22:51.311 "period_us": 100000 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "bdev_malloc_create", 00:22:51.311 "params": { 00:22:51.311 "block_size": 4096, 00:22:51.311 "name": "malloc0", 00:22:51.311 "num_blocks": 8192, 00:22:51.311 "optimal_io_boundary": 0, 00:22:51.311 "physical_block_size": 4096, 00:22:51.311 "uuid": "d9e7d21f-1f82-4af5-9b11-d84e24d55525" 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "bdev_wait_for_examine" 00:22:51.311 } 00:22:51.311 ] 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "subsystem": "nbd", 00:22:51.311 "config": [] 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "subsystem": "scheduler", 00:22:51.311 "config": [ 00:22:51.311 { 00:22:51.311 "method": "framework_set_scheduler", 00:22:51.311 "params": { 00:22:51.311 "name": "static" 00:22:51.311 } 00:22:51.311 } 00:22:51.311 ] 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "subsystem": "nvmf", 00:22:51.311 "config": [ 00:22:51.311 { 00:22:51.311 "method": "nvmf_set_config", 00:22:51.311 "params": { 00:22:51.311 
"admin_cmd_passthru": { 00:22:51.311 "identify_ctrlr": false 00:22:51.311 }, 00:22:51.311 "discovery_filter": "match_any" 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "nvmf_set_max_subsystems", 00:22:51.311 "params": { 00:22:51.311 "max_subsystems": 1024 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "nvmf_set_crdt", 00:22:51.311 "params": { 00:22:51.311 "crdt1": 0, 00:22:51.311 "crdt2": 0, 00:22:51.311 "crdt3": 0 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "nvmf_create_transport", 00:22:51.311 "params": { 00:22:51.311 "abort_timeout_sec": 1, 00:22:51.311 "ack_timeout": 0, 00:22:51.311 "buf_cache_size": 4294967295, 00:22:51.311 "c2h_success": false, 00:22:51.311 "data_wr_pool_size": 0, 00:22:51.311 "dif_insert_or_strip": false, 00:22:51.311 "in_capsule_data_size": 4096, 00:22:51.311 "io_unit_size": 131072, 00:22:51.311 "max_aq_depth": 128, 00:22:51.311 "max_io_qpairs_per_ctrlr": 127, 00:22:51.311 "max_io_size": 131072, 00:22:51.311 "max_queue_depth": 128, 00:22:51.311 "num_shared_buffers": 511, 00:22:51.311 "sock_priority": 0, 00:22:51.311 "trtype": "TCP", 00:22:51.311 "zcopy": false 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "nvmf_create_subsystem", 00:22:51.311 "params": { 00:22:51.311 "allow_any_host": false, 00:22:51.311 "ana_reporting": false, 00:22:51.311 "max_cntlid": 65519, 00:22:51.311 "max_namespaces": 10, 00:22:51.311 "min_cntlid": 1, 00:22:51.311 "model_number": "SPDK bdev Controller", 00:22:51.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.311 "serial_number": "SPDK00000000000001" 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "nvmf_subsystem_add_host", 00:22:51.311 "params": { 00:22:51.311 "host": "nqn.2016-06.io.spdk:host1", 00:22:51.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.311 "psk": "/tmp/tmp.IQwxcjDtbv" 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "nvmf_subsystem_add_ns", 00:22:51.311 "params": { 00:22:51.311 "namespace": { 00:22:51.311 "bdev_name": "malloc0", 00:22:51.311 "nguid": "D9E7D21F1F824AF59B11D84E24D55525", 00:22:51.311 "no_auto_visible": false, 00:22:51.311 "nsid": 1, 00:22:51.311 "uuid": "d9e7d21f-1f82-4af5-9b11-d84e24d55525" 00:22:51.311 }, 00:22:51.311 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:51.311 } 00:22:51.311 }, 00:22:51.311 { 00:22:51.311 "method": "nvmf_subsystem_add_listener", 00:22:51.311 "params": { 00:22:51.311 "listen_address": { 00:22:51.311 "adrfam": "IPv4", 00:22:51.311 "traddr": "10.0.0.2", 00:22:51.311 "trsvcid": "4420", 00:22:51.311 "trtype": "TCP" 00:22:51.311 }, 00:22:51.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.311 "secure_channel": true 00:22:51.311 } 00:22:51.311 } 00:22:51.311 ] 00:22:51.311 } 00:22:51.311 ] 00:22:51.311 }' 00:22:51.311 10:03:28 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:51.878 10:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:51.878 "subsystems": [ 00:22:51.878 { 00:22:51.878 "subsystem": "keyring", 00:22:51.878 "config": [] 00:22:51.878 }, 00:22:51.878 { 00:22:51.878 "subsystem": "iobuf", 00:22:51.878 "config": [ 00:22:51.878 { 00:22:51.878 "method": "iobuf_set_options", 00:22:51.878 "params": { 00:22:51.878 "large_bufsize": 135168, 00:22:51.878 "large_pool_count": 1024, 00:22:51.878 "small_bufsize": 8192, 00:22:51.878 "small_pool_count": 8192 00:22:51.878 } 00:22:51.878 } 00:22:51.878 ] 00:22:51.878 }, 00:22:51.878 { 00:22:51.878 "subsystem": 
"sock", 00:22:51.878 "config": [ 00:22:51.878 { 00:22:51.878 "method": "sock_impl_set_options", 00:22:51.878 "params": { 00:22:51.878 "enable_ktls": false, 00:22:51.878 "enable_placement_id": 0, 00:22:51.878 "enable_quickack": false, 00:22:51.878 "enable_recv_pipe": true, 00:22:51.878 "enable_zerocopy_send_client": false, 00:22:51.878 "enable_zerocopy_send_server": true, 00:22:51.878 "impl_name": "posix", 00:22:51.878 "recv_buf_size": 2097152, 00:22:51.878 "send_buf_size": 2097152, 00:22:51.878 "tls_version": 0, 00:22:51.878 "zerocopy_threshold": 0 00:22:51.878 } 00:22:51.878 }, 00:22:51.878 { 00:22:51.878 "method": "sock_impl_set_options", 00:22:51.878 "params": { 00:22:51.878 "enable_ktls": false, 00:22:51.878 "enable_placement_id": 0, 00:22:51.878 "enable_quickack": false, 00:22:51.878 "enable_recv_pipe": true, 00:22:51.878 "enable_zerocopy_send_client": false, 00:22:51.878 "enable_zerocopy_send_server": true, 00:22:51.878 "impl_name": "ssl", 00:22:51.878 "recv_buf_size": 4096, 00:22:51.878 "send_buf_size": 4096, 00:22:51.878 "tls_version": 0, 00:22:51.878 "zerocopy_threshold": 0 00:22:51.878 } 00:22:51.878 } 00:22:51.878 ] 00:22:51.878 }, 00:22:51.878 { 00:22:51.878 "subsystem": "vmd", 00:22:51.878 "config": [] 00:22:51.878 }, 00:22:51.878 { 00:22:51.878 "subsystem": "accel", 00:22:51.878 "config": [ 00:22:51.878 { 00:22:51.878 "method": "accel_set_options", 00:22:51.878 "params": { 00:22:51.878 "buf_count": 2048, 00:22:51.878 "large_cache_size": 16, 00:22:51.878 "sequence_count": 2048, 00:22:51.878 "small_cache_size": 128, 00:22:51.878 "task_count": 2048 00:22:51.878 } 00:22:51.878 } 00:22:51.878 ] 00:22:51.878 }, 00:22:51.878 { 00:22:51.878 "subsystem": "bdev", 00:22:51.878 "config": [ 00:22:51.878 { 00:22:51.878 "method": "bdev_set_options", 00:22:51.878 "params": { 00:22:51.878 "bdev_auto_examine": true, 00:22:51.878 "bdev_io_cache_size": 256, 00:22:51.878 "bdev_io_pool_size": 65535, 00:22:51.878 "iobuf_large_cache_size": 16, 00:22:51.878 "iobuf_small_cache_size": 128 00:22:51.878 } 00:22:51.878 }, 00:22:51.878 { 00:22:51.878 "method": "bdev_raid_set_options", 00:22:51.878 "params": { 00:22:51.878 "process_window_size_kb": 1024 00:22:51.878 } 00:22:51.878 }, 00:22:51.878 { 00:22:51.878 "method": "bdev_iscsi_set_options", 00:22:51.878 "params": { 00:22:51.878 "timeout_sec": 30 00:22:51.878 } 00:22:51.878 }, 00:22:51.878 { 00:22:51.878 "method": "bdev_nvme_set_options", 00:22:51.878 "params": { 00:22:51.878 "action_on_timeout": "none", 00:22:51.878 "allow_accel_sequence": false, 00:22:51.879 "arbitration_burst": 0, 00:22:51.879 "bdev_retry_count": 3, 00:22:51.879 "ctrlr_loss_timeout_sec": 0, 00:22:51.879 "delay_cmd_submit": true, 00:22:51.879 "dhchap_dhgroups": [ 00:22:51.879 "null", 00:22:51.879 "ffdhe2048", 00:22:51.879 "ffdhe3072", 00:22:51.879 "ffdhe4096", 00:22:51.879 "ffdhe6144", 00:22:51.879 "ffdhe8192" 00:22:51.879 ], 00:22:51.879 "dhchap_digests": [ 00:22:51.879 "sha256", 00:22:51.879 "sha384", 00:22:51.879 "sha512" 00:22:51.879 ], 00:22:51.879 "disable_auto_failback": false, 00:22:51.879 "fast_io_fail_timeout_sec": 0, 00:22:51.879 "generate_uuids": false, 00:22:51.879 "high_priority_weight": 0, 00:22:51.879 "io_path_stat": false, 00:22:51.879 "io_queue_requests": 512, 00:22:51.879 "keep_alive_timeout_ms": 10000, 00:22:51.879 "low_priority_weight": 0, 00:22:51.879 "medium_priority_weight": 0, 00:22:51.879 "nvme_adminq_poll_period_us": 10000, 00:22:51.879 "nvme_error_stat": false, 00:22:51.879 "nvme_ioq_poll_period_us": 0, 00:22:51.879 "rdma_cm_event_timeout_ms": 0, 
00:22:51.879 "rdma_max_cq_size": 0, 00:22:51.879 "rdma_srq_size": 0, 00:22:51.879 "reconnect_delay_sec": 0, 00:22:51.879 "timeout_admin_us": 0, 00:22:51.879 "timeout_us": 0, 00:22:51.879 "transport_ack_timeout": 0, 00:22:51.879 "transport_retry_count": 4, 00:22:51.879 "transport_tos": 0 00:22:51.879 } 00:22:51.879 }, 00:22:51.879 { 00:22:51.879 "method": "bdev_nvme_attach_controller", 00:22:51.879 "params": { 00:22:51.879 "adrfam": "IPv4", 00:22:51.879 "ctrlr_loss_timeout_sec": 0, 00:22:51.879 "ddgst": false, 00:22:51.879 "fast_io_fail_timeout_sec": 0, 00:22:51.879 "hdgst": false, 00:22:51.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:51.879 "name": "TLSTEST", 00:22:51.879 "prchk_guard": false, 00:22:51.879 "prchk_reftag": false, 00:22:51.879 "psk": "/tmp/tmp.IQwxcjDtbv", 00:22:51.879 "reconnect_delay_sec": 0, 00:22:51.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.879 "traddr": "10.0.0.2", 00:22:51.879 "trsvcid": "4420", 00:22:51.879 "trtype": "TCP" 00:22:51.879 } 00:22:51.879 }, 00:22:51.879 { 00:22:51.879 "method": "bdev_nvme_set_hotplug", 00:22:51.879 "params": { 00:22:51.879 "enable": false, 00:22:51.879 "period_us": 100000 00:22:51.879 } 00:22:51.879 }, 00:22:51.879 { 00:22:51.879 "method": "bdev_wait_for_examine" 00:22:51.879 } 00:22:51.879 ] 00:22:51.879 }, 00:22:51.879 { 00:22:51.879 "subsystem": "nbd", 00:22:51.879 "config": [] 00:22:51.879 } 00:22:51.879 ] 00:22:51.879 }' 00:22:51.879 10:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84265 00:22:51.879 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 84265 ']' 00:22:51.879 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 84265 00:22:51.879 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:51.879 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:51.879 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84265 00:22:51.879 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:51.879 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:51.879 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84265' 00:22:51.879 killing process with pid 84265 00:22:51.879 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 84265 00:22:51.879 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.879 00:22:51.879 Latency(us) 00:22:51.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.879 =================================================================================================================== 00:22:51.879 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:51.879 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 84265 00:22:51.879 [2024-05-15 10:03:29.041026] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:52.138 10:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84161 00:22:52.138 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 84161 ']' 00:22:52.138 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 84161 00:22:52.138 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:52.138 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = 
Linux ']' 00:22:52.138 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84161 00:22:52.138 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:52.138 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:52.138 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84161' 00:22:52.138 killing process with pid 84161 00:22:52.138 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 84161 00:22:52.138 [2024-05-15 10:03:29.449032] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 84161 00:22:52.138 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:52.138 [2024-05-15 10:03:29.449340] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:52.707 10:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:52.707 10:03:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:52.707 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:52.707 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.707 10:03:29 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:52.707 "subsystems": [ 00:22:52.707 { 00:22:52.707 "subsystem": "keyring", 00:22:52.707 "config": [] 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "subsystem": "iobuf", 00:22:52.707 "config": [ 00:22:52.707 { 00:22:52.707 "method": "iobuf_set_options", 00:22:52.707 "params": { 00:22:52.707 "large_bufsize": 135168, 00:22:52.707 "large_pool_count": 1024, 00:22:52.707 "small_bufsize": 8192, 00:22:52.707 "small_pool_count": 8192 00:22:52.707 } 00:22:52.707 } 00:22:52.707 ] 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "subsystem": "sock", 00:22:52.707 "config": [ 00:22:52.707 { 00:22:52.707 "method": "sock_impl_set_options", 00:22:52.707 "params": { 00:22:52.707 "enable_ktls": false, 00:22:52.707 "enable_placement_id": 0, 00:22:52.707 "enable_quickack": false, 00:22:52.707 "enable_recv_pipe": true, 00:22:52.707 "enable_zerocopy_send_client": false, 00:22:52.707 "enable_zerocopy_send_server": true, 00:22:52.707 "impl_name": "posix", 00:22:52.707 "recv_buf_size": 2097152, 00:22:52.707 "send_buf_size": 2097152, 00:22:52.707 "tls_version": 0, 00:22:52.707 "zerocopy_threshold": 0 00:22:52.707 } 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "method": "sock_impl_set_options", 00:22:52.707 "params": { 00:22:52.707 "enable_ktls": false, 00:22:52.707 "enable_placement_id": 0, 00:22:52.707 "enable_quickack": false, 00:22:52.707 "enable_recv_pipe": true, 00:22:52.707 "enable_zerocopy_send_client": false, 00:22:52.707 "enable_zerocopy_send_server": true, 00:22:52.707 "impl_name": "ssl", 00:22:52.707 "recv_buf_size": 4096, 00:22:52.707 "send_buf_size": 4096, 00:22:52.707 "tls_version": 0, 00:22:52.707 "zerocopy_threshold": 0 00:22:52.707 } 00:22:52.707 } 00:22:52.707 ] 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "subsystem": "vmd", 00:22:52.707 "config": [] 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "subsystem": "accel", 00:22:52.707 "config": [ 00:22:52.707 { 00:22:52.707 "method": "accel_set_options", 00:22:52.707 "params": { 00:22:52.707 "buf_count": 2048, 00:22:52.707 "large_cache_size": 16, 00:22:52.707 
"sequence_count": 2048, 00:22:52.707 "small_cache_size": 128, 00:22:52.707 "task_count": 2048 00:22:52.707 } 00:22:52.707 } 00:22:52.707 ] 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "subsystem": "bdev", 00:22:52.707 "config": [ 00:22:52.707 { 00:22:52.707 "method": "bdev_set_options", 00:22:52.707 "params": { 00:22:52.707 "bdev_auto_examine": true, 00:22:52.707 "bdev_io_cache_size": 256, 00:22:52.707 "bdev_io_pool_size": 65535, 00:22:52.707 "iobuf_large_cache_size": 16, 00:22:52.707 "iobuf_small_cache_size": 128 00:22:52.707 } 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "method": "bdev_raid_set_options", 00:22:52.707 "params": { 00:22:52.707 "process_window_size_kb": 1024 00:22:52.707 } 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "method": "bdev_iscsi_set_options", 00:22:52.707 "params": { 00:22:52.707 "timeout_sec": 30 00:22:52.707 } 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "method": "bdev_nvme_set_options", 00:22:52.707 "params": { 00:22:52.707 "action_on_timeout": "none", 00:22:52.707 "allow_accel_sequence": false, 00:22:52.707 "arbitration_burst": 0, 00:22:52.707 "bdev_retry_count": 3, 00:22:52.707 "ctrlr_loss_timeout_sec": 0, 00:22:52.707 "delay_cmd_submit": true, 00:22:52.707 "dhchap_dhgroups": [ 00:22:52.707 "null", 00:22:52.707 "ffdhe2048", 00:22:52.707 "ffdhe3072", 00:22:52.707 "ffdhe4096", 00:22:52.707 "ffdhe6144", 00:22:52.707 "ffdhe8192" 00:22:52.707 ], 00:22:52.707 "dhchap_digests": [ 00:22:52.707 "sha256", 00:22:52.707 "sha384", 00:22:52.707 "sha512" 00:22:52.707 ], 00:22:52.707 "disable_auto_failback": false, 00:22:52.707 "fast_io_fail_timeout_sec": 0, 00:22:52.707 "generate_uuids": false, 00:22:52.707 "high_priority_weight": 0, 00:22:52.707 "io_path_stat": false, 00:22:52.707 "io_queue_requests": 0, 00:22:52.707 "keep_alive_timeout_ms": 10000, 00:22:52.707 "low_priority_weight": 0, 00:22:52.707 "medium_priority_weight": 0, 00:22:52.707 "nvme_adminq_poll_period_us": 10000, 00:22:52.707 "nvme_error_stat": false, 00:22:52.707 "nvme_ioq_poll_period_us": 0, 00:22:52.707 "rdma_cm_event_timeout_ms": 0, 00:22:52.707 "rdma_max_cq_size": 0, 00:22:52.707 "rdma_srq_size": 0, 00:22:52.707 "reconnect_delay_sec": 0, 00:22:52.707 "timeout_admin_us": 0, 00:22:52.707 "timeout_us": 0, 00:22:52.707 "transport_ack_timeout": 0, 00:22:52.707 "transport_retry_count": 4, 00:22:52.707 "transport_tos": 0 00:22:52.707 } 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "method": "bdev_nvme_set_hotplug", 00:22:52.707 "params": { 00:22:52.707 "enable": false, 00:22:52.707 "period_us": 100000 00:22:52.707 } 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "method": "bdev_malloc_create", 00:22:52.707 "params": { 00:22:52.707 "block_size": 4096, 00:22:52.707 "name": "malloc0", 00:22:52.707 "num_blocks": 8192, 00:22:52.707 "optimal_io_boundary": 0, 00:22:52.707 "physical_block_size": 4096, 00:22:52.707 "uuid": "d9e7d21f-1f82-4af5-9b11-d84e24d55525" 00:22:52.707 } 00:22:52.707 }, 00:22:52.707 { 00:22:52.707 "method": "bdev_wait_for_examine" 00:22:52.707 } 00:22:52.707 ] 00:22:52.707 }, 00:22:52.707 { 00:22:52.708 "subsystem": "nbd", 00:22:52.708 "config": [] 00:22:52.708 }, 00:22:52.708 { 00:22:52.708 "subsystem": "scheduler", 00:22:52.708 "config": [ 00:22:52.708 { 00:22:52.708 "method": "framework_set_scheduler", 00:22:52.708 "params": { 00:22:52.708 "name": "static" 00:22:52.708 } 00:22:52.708 } 00:22:52.708 ] 00:22:52.708 }, 00:22:52.708 { 00:22:52.708 "subsystem": "nvmf", 00:22:52.708 "config": [ 00:22:52.708 { 00:22:52.708 "method": "nvmf_set_config", 00:22:52.708 "params": { 00:22:52.708 
"admin_cmd_passthru": { 00:22:52.708 "identify_ctrlr": false 00:22:52.708 }, 00:22:52.708 "discovery_filter": "match_any" 00:22:52.708 } 00:22:52.708 }, 00:22:52.708 { 00:22:52.708 "method": "nvmf_set_max_subsystems", 00:22:52.708 "params": { 00:22:52.708 "max_subsystems": 1024 00:22:52.708 } 00:22:52.708 }, 00:22:52.708 { 00:22:52.708 "method": "nvmf_set_crdt", 00:22:52.708 "params": { 00:22:52.708 "crdt1": 0, 00:22:52.708 "crdt2": 0, 00:22:52.708 "crdt3": 0 00:22:52.708 } 00:22:52.708 }, 00:22:52.708 { 00:22:52.708 "method": "nvmf_create_transport", 00:22:52.708 "params": { 00:22:52.708 "abort_timeout_sec": 1, 00:22:52.708 "ack_timeout": 0, 00:22:52.708 "buf_cache_size": 4294967295, 00:22:52.708 "c2h_success": false, 00:22:52.708 "data_wr_pool_size": 0, 00:22:52.708 "dif_insert_or_strip": false, 00:22:52.708 "in_capsule_data_size": 4096, 00:22:52.708 "io_unit_size": 131072, 00:22:52.708 "max_aq_depth": 128, 00:22:52.708 "max_io_qpairs_per_ctrlr": 127, 00:22:52.708 "max_io_size": 131072, 00:22:52.708 "max_queue_depth": 128, 00:22:52.708 "num_shared_buffers": 511, 00:22:52.708 "sock_priority": 0, 00:22:52.708 "trtype": "TCP", 00:22:52.708 "zcopy": false 00:22:52.708 } 00:22:52.708 }, 00:22:52.708 { 00:22:52.708 "method": "nvmf_create_subsystem", 00:22:52.708 "params": { 00:22:52.708 "allow_any_host": false, 00:22:52.708 "ana_reporting": false, 00:22:52.708 "max_cntlid": 65519, 00:22:52.708 "max_namespaces": 10, 00:22:52.708 "min_cntlid": 1, 00:22:52.708 "model_number": "SPDK bdev Controller", 00:22:52.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.708 "serial_number": "SPDK00000000000001" 00:22:52.708 } 00:22:52.708 }, 00:22:52.708 { 00:22:52.708 "method": "nvmf_subsystem_add_host", 00:22:52.708 "params": { 00:22:52.708 "host": "nqn.2016-06.io.spdk:host1", 00:22:52.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.708 "psk": "/tmp/tmp.IQwxcjDtbv" 00:22:52.708 } 00:22:52.708 }, 00:22:52.708 { 00:22:52.708 "method": "nvmf_subsystem_add_ns", 00:22:52.708 "params": { 00:22:52.708 "namespace": { 00:22:52.708 "bdev_name": "malloc0", 00:22:52.708 "nguid": "D9E7D21F1F824AF59B11D84E24D55525", 00:22:52.708 "no_auto_visible": false, 00:22:52.708 "nsid": 1, 00:22:52.708 "uuid": "d9e7d21f-1f82-4af5-9b11-d84e24d55525" 00:22:52.708 }, 00:22:52.708 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:52.708 } 00:22:52.708 }, 00:22:52.708 { 00:22:52.708 "method": "nvmf_subsystem_add_listener", 00:22:52.708 "params": { 00:22:52.708 "listen_address": { 00:22:52.708 "adrfam": "IPv4", 00:22:52.708 "traddr": "10.0.0.2", 00:22:52.708 "trsvcid": "4420", 00:22:52.708 "trtype": "TCP" 00:22:52.708 }, 00:22:52.708 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.708 "secure_channel": true 00:22:52.708 } 00:22:52.708 } 00:22:52.708 ] 00:22:52.708 } 00:22:52.708 ] 00:22:52.708 }' 00:22:52.708 10:03:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84344 00:22:52.708 10:03:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84344 00:22:52.708 10:03:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:52.708 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 84344 ']' 00:22:52.708 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:52.708 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:52.708 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.708 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:52.708 10:03:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.708 [2024-05-15 10:03:29.931016] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:22:52.708 [2024-05-15 10:03:29.931608] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.708 [2024-05-15 10:03:30.087512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.996 [2024-05-15 10:03:30.264387] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.996 [2024-05-15 10:03:30.264739] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.996 [2024-05-15 10:03:30.264887] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.996 [2024-05-15 10:03:30.264961] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.996 [2024-05-15 10:03:30.265062] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.996 [2024-05-15 10:03:30.265282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.254 [2024-05-15 10:03:30.538279] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.254 [2024-05-15 10:03:30.554242] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:53.254 [2024-05-15 10:03:30.570201] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:53.254 [2024-05-15 10:03:30.570609] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:53.254 [2024-05-15 10:03:30.570917] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84388 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84388 /var/tmp/bdevperf.sock 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 84388 ']' 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # 
local max_retries=100 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:53.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:53.820 10:03:30 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:53.820 "subsystems": [ 00:22:53.820 { 00:22:53.820 "subsystem": "keyring", 00:22:53.820 "config": [] 00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "subsystem": "iobuf", 00:22:53.820 "config": [ 00:22:53.820 { 00:22:53.820 "method": "iobuf_set_options", 00:22:53.820 "params": { 00:22:53.820 "large_bufsize": 135168, 00:22:53.820 "large_pool_count": 1024, 00:22:53.820 "small_bufsize": 8192, 00:22:53.820 "small_pool_count": 8192 00:22:53.820 } 00:22:53.820 } 00:22:53.820 ] 00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "subsystem": "sock", 00:22:53.820 "config": [ 00:22:53.820 { 00:22:53.820 "method": "sock_impl_set_options", 00:22:53.820 "params": { 00:22:53.820 "enable_ktls": false, 00:22:53.820 "enable_placement_id": 0, 00:22:53.820 "enable_quickack": false, 00:22:53.820 "enable_recv_pipe": true, 00:22:53.820 "enable_zerocopy_send_client": false, 00:22:53.820 "enable_zerocopy_send_server": true, 00:22:53.820 "impl_name": "posix", 00:22:53.820 "recv_buf_size": 2097152, 00:22:53.820 "send_buf_size": 2097152, 00:22:53.820 "tls_version": 0, 00:22:53.820 "zerocopy_threshold": 0 00:22:53.820 } 00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "method": "sock_impl_set_options", 00:22:53.820 "params": { 00:22:53.820 "enable_ktls": false, 00:22:53.820 "enable_placement_id": 0, 00:22:53.820 "enable_quickack": false, 00:22:53.820 "enable_recv_pipe": true, 00:22:53.820 "enable_zerocopy_send_client": false, 00:22:53.820 "enable_zerocopy_send_server": true, 00:22:53.820 "impl_name": "ssl", 00:22:53.820 "recv_buf_size": 4096, 00:22:53.820 "send_buf_size": 4096, 00:22:53.820 "tls_version": 0, 00:22:53.820 "zerocopy_threshold": 0 00:22:53.820 } 00:22:53.820 } 00:22:53.820 ] 00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "subsystem": "vmd", 00:22:53.820 "config": [] 00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "subsystem": "accel", 00:22:53.820 "config": [ 00:22:53.820 { 00:22:53.820 "method": "accel_set_options", 00:22:53.820 "params": { 00:22:53.820 "buf_count": 2048, 00:22:53.820 "large_cache_size": 16, 00:22:53.820 "sequence_count": 2048, 00:22:53.820 "small_cache_size": 128, 00:22:53.820 "task_count": 2048 00:22:53.820 } 00:22:53.820 } 00:22:53.820 ] 00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "subsystem": "bdev", 00:22:53.820 "config": [ 00:22:53.820 { 00:22:53.820 "method": "bdev_set_options", 00:22:53.820 "params": { 00:22:53.820 "bdev_auto_examine": true, 00:22:53.820 "bdev_io_cache_size": 256, 00:22:53.820 "bdev_io_pool_size": 65535, 00:22:53.820 "iobuf_large_cache_size": 16, 00:22:53.820 "iobuf_small_cache_size": 128 00:22:53.820 } 00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "method": "bdev_raid_set_options", 00:22:53.820 "params": { 00:22:53.820 "process_window_size_kb": 1024 00:22:53.820 } 00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "method": "bdev_iscsi_set_options", 00:22:53.820 "params": { 00:22:53.820 "timeout_sec": 30 00:22:53.820 } 
00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "method": "bdev_nvme_set_options", 00:22:53.820 "params": { 00:22:53.820 "action_on_timeout": "none", 00:22:53.820 "allow_accel_sequence": false, 00:22:53.820 "arbitration_burst": 0, 00:22:53.820 "bdev_retry_count": 3, 00:22:53.820 "ctrlr_loss_timeout_sec": 0, 00:22:53.820 "delay_cmd_submit": true, 00:22:53.820 "dhchap_dhgroups": [ 00:22:53.820 "null", 00:22:53.820 "ffdhe2048", 00:22:53.820 "ffdhe3072", 00:22:53.820 "ffdhe4096", 00:22:53.820 "ffdhe6144", 00:22:53.820 "ffdhe8192" 00:22:53.820 ], 00:22:53.820 "dhchap_digests": [ 00:22:53.820 "sha256", 00:22:53.820 "sha384", 00:22:53.820 "sha512" 00:22:53.820 ], 00:22:53.820 "disable_auto_failback": false, 00:22:53.820 "fast_io_fail_timeout_sec": 0, 00:22:53.820 "generate_uuids": false, 00:22:53.820 "high_priority_weight": 0, 00:22:53.820 "io_path_stat": false, 00:22:53.820 "io_queue_requests": 512, 00:22:53.820 "keep_alive_timeout_ms": 10000, 00:22:53.820 "low_priority_weight": 0, 00:22:53.820 "medium_priority_weight": 0, 00:22:53.820 "nvme_adminq_poll_period_us": 10000, 00:22:53.820 "nvme_error_stat": false, 00:22:53.820 "nvme_ioq_poll_period_us": 0, 00:22:53.820 "rdma_cm_event_timeout_ms": 0, 00:22:53.820 "rdma_max_cq_size": 0, 00:22:53.820 "rdma_srq_size": 0, 00:22:53.820 "reconnect_delay_sec": 0, 00:22:53.820 "timeout_admin_us": 0, 00:22:53.820 "timeout_us": 0, 00:22:53.820 "transport_ack_timeout": 0, 00:22:53.820 "transport_retry_count": 4, 00:22:53.820 "transport_tos": 0 00:22:53.820 } 00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "method": "bdev_nvme_attach_controller", 00:22:53.820 "params": { 00:22:53.820 "adrfam": "IPv4", 00:22:53.820 "ctrlr_loss_timeout_sec": 0, 00:22:53.820 "ddgst": false, 00:22:53.820 "fast_io_fail_timeout_sec": 0, 00:22:53.820 "hdgst": false, 00:22:53.820 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.820 "name": "TLSTEST", 00:22:53.820 "prchk_guard": false, 00:22:53.820 "prchk_reftag": false, 00:22:53.820 "psk": "/tmp/tmp.IQwxcjDtbv", 00:22:53.820 "reconnect_delay_sec": 0, 00:22:53.820 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.820 "traddr": "10.0.0.2", 00:22:53.820 "trsvcid": "4420", 00:22:53.820 "trtype": "TCP" 00:22:53.820 } 00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "method": "bdev_nvme_set_hotplug", 00:22:53.820 "params": { 00:22:53.820 "enable": false, 00:22:53.820 "period_us": 100000 00:22:53.820 } 00:22:53.820 }, 00:22:53.820 { 00:22:53.820 "method": "bdev_wait_for_examine" 00:22:53.820 } 00:22:53.820 ] 00:22:53.820 }, 00:22:53.820 { 00:22:53.821 "subsystem": "nbd", 00:22:53.821 "config": [] 00:22:53.821 } 00:22:53.821 ] 00:22:53.821 }' 00:22:53.821 10:03:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.821 [2024-05-15 10:03:31.020692] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:22:53.821 [2024-05-15 10:03:31.021055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84388 ] 00:22:53.821 [2024-05-15 10:03:31.164148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.079 [2024-05-15 10:03:31.327470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.338 [2024-05-15 10:03:31.531655] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.338 [2024-05-15 10:03:31.532339] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:54.904 10:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:54.905 10:03:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:54.905 10:03:32 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:54.905 Running I/O for 10 seconds... 00:23:04.952 00:23:04.952 Latency(us) 00:23:04.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.952 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:04.952 Verification LBA range: start 0x0 length 0x2000 00:23:04.952 TLSTESTn1 : 10.01 4487.70 17.53 0.00 0.00 28476.60 4150.61 26588.89 00:23:04.952 =================================================================================================================== 00:23:04.952 Total : 4487.70 17.53 0.00 0.00 28476.60 4150.61 26588.89 00:23:04.952 0 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84388 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 84388 ']' 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 84388 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84388 00:23:04.952 killing process with pid 84388 00:23:04.952 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.952 00:23:04.952 Latency(us) 00:23:04.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.952 =================================================================================================================== 00:23:04.952 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84388' 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 84388 00:23:04.952 [2024-05-15 10:03:42.240297] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:04.952 10:03:42 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@971 -- # wait 84388 00:23:05.519 10:03:42 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84344 00:23:05.519 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 84344 ']' 00:23:05.519 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 84344 00:23:05.519 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:05.519 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:05.519 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84344 00:23:05.519 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:05.519 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:05.519 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84344' 00:23:05.519 killing process with pid 84344 00:23:05.519 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 84344 00:23:05.519 [2024-05-15 10:03:42.661827] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:05.519 10:03:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 84344 00:23:05.519 [2024-05-15 10:03:42.662051] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:05.777 10:03:43 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:05.777 10:03:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.777 10:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:05.777 10:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.777 10:03:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84542 00:23:05.778 10:03:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84542 00:23:05.778 10:03:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:05.778 10:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 84542 ']' 00:23:05.778 10:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.778 10:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:05.778 10:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.778 10:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:05.778 10:03:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:05.778 [2024-05-15 10:03:43.128493] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:23:05.778 [2024-05-15 10:03:43.128878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.034 [2024-05-15 10:03:43.289664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.291 [2024-05-15 10:03:43.454302] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.291 [2024-05-15 10:03:43.454657] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.291 [2024-05-15 10:03:43.454788] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.291 [2024-05-15 10:03:43.454907] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.291 [2024-05-15 10:03:43.454946] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.291 [2024-05-15 10:03:43.455081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.855 10:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:06.856 10:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:06.856 10:03:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:06.856 10:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:06.856 10:03:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.856 10:03:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.856 10:03:44 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.IQwxcjDtbv 00:23:06.856 10:03:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IQwxcjDtbv 00:23:06.856 10:03:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:07.113 [2024-05-15 10:03:44.406329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.113 10:03:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:07.371 10:03:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:07.628 [2024-05-15 10:03:44.938399] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:07.628 [2024-05-15 10:03:44.938820] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.628 [2024-05-15 10:03:44.939189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.628 10:03:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:07.885 malloc0 00:23:08.143 10:03:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:08.401 10:03:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IQwxcjDtbv 00:23:08.401 [2024-05-15 10:03:45.767110] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:08.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.658 10:03:45 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=84644 00:23:08.658 10:03:45 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:08.658 10:03:45 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.658 10:03:45 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 84644 /var/tmp/bdevperf.sock 00:23:08.658 10:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 84644 ']' 00:23:08.658 10:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.658 10:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:08.658 10:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.658 10:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:08.658 10:03:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.658 [2024-05-15 10:03:45.862501] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:23:08.658 [2024-05-15 10:03:45.863023] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84644 ] 00:23:08.658 [2024-05-15 10:03:46.012129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.916 [2024-05-15 10:03:46.171776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.481 10:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:09.481 10:03:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:09.481 10:03:46 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IQwxcjDtbv 00:23:09.738 10:03:47 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:10.305 [2024-05-15 10:03:47.420524] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:10.305 nvme0n1 00:23:10.305 10:03:47 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:10.305 Running I/O for 1 seconds... 
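On the initiator side, this second pass registers the PSK as a named keyring entry and hands the key name, not the file path, to the controller attach; unlike the earlier --psk /tmp/... attach, no spdk_nvme_ctrlr_opts.psk deprecation warning is logged for this form. Condensed from the commands above (paths abbreviated as before), all issued against the bdevperf RPC socket:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IQwxcjDtbv
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # drives the 1-second verify run whose results follow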
00:23:11.680 00:23:11.680 Latency(us) 00:23:11.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.680 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:11.680 Verification LBA range: start 0x0 length 0x2000 00:23:11.680 nvme0n1 : 1.01 4729.55 18.47 0.00 0.00 26822.63 4868.39 20721.86 00:23:11.680 =================================================================================================================== 00:23:11.680 Total : 4729.55 18.47 0.00 0.00 26822.63 4868.39 20721.86 00:23:11.680 0 00:23:11.680 10:03:48 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 84644 00:23:11.680 10:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 84644 ']' 00:23:11.680 10:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 84644 00:23:11.680 10:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:11.680 10:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:11.680 10:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84644 00:23:11.680 killing process with pid 84644 00:23:11.680 Received shutdown signal, test time was about 1.000000 seconds 00:23:11.680 00:23:11.680 Latency(us) 00:23:11.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.680 =================================================================================================================== 00:23:11.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:11.680 10:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:11.680 10:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:11.680 10:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84644' 00:23:11.680 10:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 84644 00:23:11.680 10:03:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 84644 00:23:11.939 10:03:49 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 84542 00:23:11.939 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 84542 ']' 00:23:11.939 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 84542 00:23:11.939 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:11.939 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:11.939 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84542 00:23:11.939 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:11.939 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:11.939 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84542' 00:23:11.939 killing process with pid 84542 00:23:11.939 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 84542 00:23:11.939 [2024-05-15 10:03:49.098232] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:11.939 [2024-05-15 10:03:49.098486] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:11.939 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 
84542 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84725 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84725 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 84725 ']' 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:12.198 10:03:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.198 [2024-05-15 10:03:49.579857] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:23:12.198 [2024-05-15 10:03:49.580380] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.456 [2024-05-15 10:03:49.730081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.714 [2024-05-15 10:03:49.888708] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.714 [2024-05-15 10:03:49.889303] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.714 [2024-05-15 10:03:49.889513] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.714 [2024-05-15 10:03:49.889764] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.714 [2024-05-15 10:03:49.889960] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:12.714 [2024-05-15 10:03:49.890165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.281 [2024-05-15 10:03:50.576892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.281 malloc0 00:23:13.281 [2024-05-15 10:03:50.620925] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:13.281 [2024-05-15 10:03:50.622198] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.281 [2024-05-15 10:03:50.622829] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=84775 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 84775 /var/tmp/bdevperf.sock 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 84775 ']' 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:13.281 10:03:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.540 [2024-05-15 10:03:50.699167] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:23:13.540 [2024-05-15 10:03:50.699531] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84775 ] 00:23:13.540 [2024-05-15 10:03:50.841085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.799 [2024-05-15 10:03:51.010165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.365 10:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:14.365 10:03:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:14.365 10:03:51 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IQwxcjDtbv 00:23:14.624 10:03:51 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:14.883 [2024-05-15 10:03:52.121572] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:14.883 nvme0n1 00:23:14.883 10:03:52 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:15.141 Running I/O for 1 seconds... 00:23:16.076 00:23:16.076 Latency(us) 00:23:16.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.076 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:16.076 Verification LBA range: start 0x0 length 0x2000 00:23:16.076 nvme0n1 : 1.01 5496.52 21.47 0.00 0.00 23108.27 4525.10 17476.27 00:23:16.076 =================================================================================================================== 00:23:16.076 Total : 5496.52 21.47 0.00 0.00 23108.27 4525.10 17476.27 00:23:16.076 0 00:23:16.076 10:03:53 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:16.076 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:16.076 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.335 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:16.335 10:03:53 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:16.335 "subsystems": [ 00:23:16.335 { 00:23:16.335 "subsystem": "keyring", 00:23:16.335 "config": [ 00:23:16.335 { 00:23:16.335 "method": "keyring_file_add_key", 00:23:16.335 "params": { 00:23:16.335 "name": "key0", 00:23:16.335 "path": "/tmp/tmp.IQwxcjDtbv" 00:23:16.335 } 00:23:16.335 } 00:23:16.335 ] 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "subsystem": "iobuf", 00:23:16.335 "config": [ 00:23:16.335 { 00:23:16.335 "method": "iobuf_set_options", 00:23:16.335 "params": { 00:23:16.335 "large_bufsize": 135168, 00:23:16.335 "large_pool_count": 1024, 00:23:16.335 "small_bufsize": 8192, 00:23:16.335 "small_pool_count": 8192 00:23:16.335 } 00:23:16.335 } 00:23:16.335 ] 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "subsystem": "sock", 00:23:16.335 "config": [ 00:23:16.335 { 00:23:16.335 "method": "sock_impl_set_options", 00:23:16.335 "params": { 00:23:16.335 "enable_ktls": false, 00:23:16.335 "enable_placement_id": 0, 00:23:16.335 "enable_quickack": false, 00:23:16.335 "enable_recv_pipe": true, 00:23:16.335 
"enable_zerocopy_send_client": false, 00:23:16.335 "enable_zerocopy_send_server": true, 00:23:16.335 "impl_name": "posix", 00:23:16.335 "recv_buf_size": 2097152, 00:23:16.335 "send_buf_size": 2097152, 00:23:16.335 "tls_version": 0, 00:23:16.335 "zerocopy_threshold": 0 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "sock_impl_set_options", 00:23:16.335 "params": { 00:23:16.335 "enable_ktls": false, 00:23:16.335 "enable_placement_id": 0, 00:23:16.335 "enable_quickack": false, 00:23:16.335 "enable_recv_pipe": true, 00:23:16.335 "enable_zerocopy_send_client": false, 00:23:16.335 "enable_zerocopy_send_server": true, 00:23:16.335 "impl_name": "ssl", 00:23:16.335 "recv_buf_size": 4096, 00:23:16.335 "send_buf_size": 4096, 00:23:16.335 "tls_version": 0, 00:23:16.335 "zerocopy_threshold": 0 00:23:16.335 } 00:23:16.335 } 00:23:16.335 ] 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "subsystem": "vmd", 00:23:16.335 "config": [] 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "subsystem": "accel", 00:23:16.335 "config": [ 00:23:16.335 { 00:23:16.335 "method": "accel_set_options", 00:23:16.335 "params": { 00:23:16.335 "buf_count": 2048, 00:23:16.335 "large_cache_size": 16, 00:23:16.335 "sequence_count": 2048, 00:23:16.335 "small_cache_size": 128, 00:23:16.335 "task_count": 2048 00:23:16.335 } 00:23:16.335 } 00:23:16.335 ] 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "subsystem": "bdev", 00:23:16.335 "config": [ 00:23:16.335 { 00:23:16.335 "method": "bdev_set_options", 00:23:16.335 "params": { 00:23:16.335 "bdev_auto_examine": true, 00:23:16.335 "bdev_io_cache_size": 256, 00:23:16.335 "bdev_io_pool_size": 65535, 00:23:16.335 "iobuf_large_cache_size": 16, 00:23:16.335 "iobuf_small_cache_size": 128 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "bdev_raid_set_options", 00:23:16.335 "params": { 00:23:16.335 "process_window_size_kb": 1024 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "bdev_iscsi_set_options", 00:23:16.335 "params": { 00:23:16.335 "timeout_sec": 30 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "bdev_nvme_set_options", 00:23:16.335 "params": { 00:23:16.335 "action_on_timeout": "none", 00:23:16.335 "allow_accel_sequence": false, 00:23:16.335 "arbitration_burst": 0, 00:23:16.335 "bdev_retry_count": 3, 00:23:16.335 "ctrlr_loss_timeout_sec": 0, 00:23:16.335 "delay_cmd_submit": true, 00:23:16.335 "dhchap_dhgroups": [ 00:23:16.335 "null", 00:23:16.335 "ffdhe2048", 00:23:16.335 "ffdhe3072", 00:23:16.335 "ffdhe4096", 00:23:16.335 "ffdhe6144", 00:23:16.335 "ffdhe8192" 00:23:16.335 ], 00:23:16.335 "dhchap_digests": [ 00:23:16.335 "sha256", 00:23:16.335 "sha384", 00:23:16.335 "sha512" 00:23:16.335 ], 00:23:16.335 "disable_auto_failback": false, 00:23:16.335 "fast_io_fail_timeout_sec": 0, 00:23:16.335 "generate_uuids": false, 00:23:16.335 "high_priority_weight": 0, 00:23:16.335 "io_path_stat": false, 00:23:16.335 "io_queue_requests": 0, 00:23:16.335 "keep_alive_timeout_ms": 10000, 00:23:16.335 "low_priority_weight": 0, 00:23:16.335 "medium_priority_weight": 0, 00:23:16.335 "nvme_adminq_poll_period_us": 10000, 00:23:16.335 "nvme_error_stat": false, 00:23:16.335 "nvme_ioq_poll_period_us": 0, 00:23:16.335 "rdma_cm_event_timeout_ms": 0, 00:23:16.335 "rdma_max_cq_size": 0, 00:23:16.335 "rdma_srq_size": 0, 00:23:16.335 "reconnect_delay_sec": 0, 00:23:16.335 "timeout_admin_us": 0, 00:23:16.335 "timeout_us": 0, 00:23:16.335 "transport_ack_timeout": 0, 00:23:16.335 "transport_retry_count": 4, 00:23:16.335 "transport_tos": 0 
00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "bdev_nvme_set_hotplug", 00:23:16.335 "params": { 00:23:16.335 "enable": false, 00:23:16.335 "period_us": 100000 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "bdev_malloc_create", 00:23:16.335 "params": { 00:23:16.335 "block_size": 4096, 00:23:16.335 "name": "malloc0", 00:23:16.335 "num_blocks": 8192, 00:23:16.335 "optimal_io_boundary": 0, 00:23:16.335 "physical_block_size": 4096, 00:23:16.335 "uuid": "e2472189-a538-4997-8ff5-cda067cf9fbd" 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "bdev_wait_for_examine" 00:23:16.335 } 00:23:16.335 ] 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "subsystem": "nbd", 00:23:16.335 "config": [] 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "subsystem": "scheduler", 00:23:16.335 "config": [ 00:23:16.335 { 00:23:16.335 "method": "framework_set_scheduler", 00:23:16.335 "params": { 00:23:16.335 "name": "static" 00:23:16.335 } 00:23:16.335 } 00:23:16.335 ] 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "subsystem": "nvmf", 00:23:16.335 "config": [ 00:23:16.335 { 00:23:16.335 "method": "nvmf_set_config", 00:23:16.335 "params": { 00:23:16.335 "admin_cmd_passthru": { 00:23:16.335 "identify_ctrlr": false 00:23:16.335 }, 00:23:16.335 "discovery_filter": "match_any" 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "nvmf_set_max_subsystems", 00:23:16.335 "params": { 00:23:16.335 "max_subsystems": 1024 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "nvmf_set_crdt", 00:23:16.335 "params": { 00:23:16.335 "crdt1": 0, 00:23:16.335 "crdt2": 0, 00:23:16.335 "crdt3": 0 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "nvmf_create_transport", 00:23:16.335 "params": { 00:23:16.335 "abort_timeout_sec": 1, 00:23:16.335 "ack_timeout": 0, 00:23:16.335 "buf_cache_size": 4294967295, 00:23:16.335 "c2h_success": false, 00:23:16.335 "data_wr_pool_size": 0, 00:23:16.335 "dif_insert_or_strip": false, 00:23:16.335 "in_capsule_data_size": 4096, 00:23:16.335 "io_unit_size": 131072, 00:23:16.335 "max_aq_depth": 128, 00:23:16.335 "max_io_qpairs_per_ctrlr": 127, 00:23:16.335 "max_io_size": 131072, 00:23:16.335 "max_queue_depth": 128, 00:23:16.335 "num_shared_buffers": 511, 00:23:16.335 "sock_priority": 0, 00:23:16.335 "trtype": "TCP", 00:23:16.335 "zcopy": false 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "nvmf_create_subsystem", 00:23:16.335 "params": { 00:23:16.335 "allow_any_host": false, 00:23:16.335 "ana_reporting": false, 00:23:16.335 "max_cntlid": 65519, 00:23:16.335 "max_namespaces": 32, 00:23:16.335 "min_cntlid": 1, 00:23:16.335 "model_number": "SPDK bdev Controller", 00:23:16.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.335 "serial_number": "00000000000000000000" 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "nvmf_subsystem_add_host", 00:23:16.335 "params": { 00:23:16.335 "host": "nqn.2016-06.io.spdk:host1", 00:23:16.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.335 "psk": "key0" 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 "method": "nvmf_subsystem_add_ns", 00:23:16.335 "params": { 00:23:16.335 "namespace": { 00:23:16.335 "bdev_name": "malloc0", 00:23:16.335 "nguid": "E2472189A53849978FF5CDA067CF9FBD", 00:23:16.335 "no_auto_visible": false, 00:23:16.335 "nsid": 1, 00:23:16.335 "uuid": "e2472189-a538-4997-8ff5-cda067cf9fbd" 00:23:16.335 }, 00:23:16.335 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:16.335 } 00:23:16.335 }, 00:23:16.335 { 00:23:16.335 
"method": "nvmf_subsystem_add_listener", 00:23:16.335 "params": { 00:23:16.335 "listen_address": { 00:23:16.336 "adrfam": "IPv4", 00:23:16.336 "traddr": "10.0.0.2", 00:23:16.336 "trsvcid": "4420", 00:23:16.336 "trtype": "TCP" 00:23:16.336 }, 00:23:16.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.336 "secure_channel": true 00:23:16.336 } 00:23:16.336 } 00:23:16.336 ] 00:23:16.336 } 00:23:16.336 ] 00:23:16.336 }' 00:23:16.336 10:03:53 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:16.596 10:03:53 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:16.596 "subsystems": [ 00:23:16.596 { 00:23:16.596 "subsystem": "keyring", 00:23:16.596 "config": [ 00:23:16.596 { 00:23:16.596 "method": "keyring_file_add_key", 00:23:16.596 "params": { 00:23:16.596 "name": "key0", 00:23:16.596 "path": "/tmp/tmp.IQwxcjDtbv" 00:23:16.596 } 00:23:16.596 } 00:23:16.596 ] 00:23:16.596 }, 00:23:16.596 { 00:23:16.596 "subsystem": "iobuf", 00:23:16.596 "config": [ 00:23:16.596 { 00:23:16.596 "method": "iobuf_set_options", 00:23:16.596 "params": { 00:23:16.596 "large_bufsize": 135168, 00:23:16.596 "large_pool_count": 1024, 00:23:16.596 "small_bufsize": 8192, 00:23:16.596 "small_pool_count": 8192 00:23:16.596 } 00:23:16.596 } 00:23:16.596 ] 00:23:16.596 }, 00:23:16.596 { 00:23:16.596 "subsystem": "sock", 00:23:16.596 "config": [ 00:23:16.596 { 00:23:16.596 "method": "sock_impl_set_options", 00:23:16.596 "params": { 00:23:16.596 "enable_ktls": false, 00:23:16.596 "enable_placement_id": 0, 00:23:16.596 "enable_quickack": false, 00:23:16.596 "enable_recv_pipe": true, 00:23:16.596 "enable_zerocopy_send_client": false, 00:23:16.596 "enable_zerocopy_send_server": true, 00:23:16.596 "impl_name": "posix", 00:23:16.596 "recv_buf_size": 2097152, 00:23:16.596 "send_buf_size": 2097152, 00:23:16.596 "tls_version": 0, 00:23:16.596 "zerocopy_threshold": 0 00:23:16.596 } 00:23:16.596 }, 00:23:16.596 { 00:23:16.596 "method": "sock_impl_set_options", 00:23:16.596 "params": { 00:23:16.596 "enable_ktls": false, 00:23:16.596 "enable_placement_id": 0, 00:23:16.596 "enable_quickack": false, 00:23:16.596 "enable_recv_pipe": true, 00:23:16.596 "enable_zerocopy_send_client": false, 00:23:16.596 "enable_zerocopy_send_server": true, 00:23:16.596 "impl_name": "ssl", 00:23:16.596 "recv_buf_size": 4096, 00:23:16.596 "send_buf_size": 4096, 00:23:16.596 "tls_version": 0, 00:23:16.596 "zerocopy_threshold": 0 00:23:16.596 } 00:23:16.596 } 00:23:16.596 ] 00:23:16.596 }, 00:23:16.596 { 00:23:16.596 "subsystem": "vmd", 00:23:16.596 "config": [] 00:23:16.596 }, 00:23:16.596 { 00:23:16.596 "subsystem": "accel", 00:23:16.596 "config": [ 00:23:16.596 { 00:23:16.596 "method": "accel_set_options", 00:23:16.596 "params": { 00:23:16.596 "buf_count": 2048, 00:23:16.596 "large_cache_size": 16, 00:23:16.596 "sequence_count": 2048, 00:23:16.596 "small_cache_size": 128, 00:23:16.596 "task_count": 2048 00:23:16.596 } 00:23:16.596 } 00:23:16.596 ] 00:23:16.596 }, 00:23:16.596 { 00:23:16.596 "subsystem": "bdev", 00:23:16.596 "config": [ 00:23:16.596 { 00:23:16.596 "method": "bdev_set_options", 00:23:16.596 "params": { 00:23:16.596 "bdev_auto_examine": true, 00:23:16.596 "bdev_io_cache_size": 256, 00:23:16.596 "bdev_io_pool_size": 65535, 00:23:16.596 "iobuf_large_cache_size": 16, 00:23:16.596 "iobuf_small_cache_size": 128 00:23:16.596 } 00:23:16.596 }, 00:23:16.596 { 00:23:16.596 "method": "bdev_raid_set_options", 00:23:16.596 "params": { 00:23:16.596 "process_window_size_kb": 
1024 00:23:16.596 } 00:23:16.596 }, 00:23:16.596 { 00:23:16.596 "method": "bdev_iscsi_set_options", 00:23:16.596 "params": { 00:23:16.596 "timeout_sec": 30 00:23:16.596 } 00:23:16.596 }, 00:23:16.596 { 00:23:16.596 "method": "bdev_nvme_set_options", 00:23:16.596 "params": { 00:23:16.596 "action_on_timeout": "none", 00:23:16.596 "allow_accel_sequence": false, 00:23:16.596 "arbitration_burst": 0, 00:23:16.596 "bdev_retry_count": 3, 00:23:16.596 "ctrlr_loss_timeout_sec": 0, 00:23:16.596 "delay_cmd_submit": true, 00:23:16.596 "dhchap_dhgroups": [ 00:23:16.596 "null", 00:23:16.596 "ffdhe2048", 00:23:16.596 "ffdhe3072", 00:23:16.596 "ffdhe4096", 00:23:16.596 "ffdhe6144", 00:23:16.596 "ffdhe8192" 00:23:16.596 ], 00:23:16.596 "dhchap_digests": [ 00:23:16.596 "sha256", 00:23:16.596 "sha384", 00:23:16.596 "sha512" 00:23:16.596 ], 00:23:16.596 "disable_auto_failback": false, 00:23:16.596 "fast_io_fail_timeout_sec": 0, 00:23:16.596 "generate_uuids": false, 00:23:16.596 "high_priority_weight": 0, 00:23:16.596 "io_path_stat": false, 00:23:16.596 "io_queue_requests": 512, 00:23:16.596 "keep_alive_timeout_ms": 10000, 00:23:16.596 "low_priority_weight": 0, 00:23:16.596 "medium_priority_weight": 0, 00:23:16.596 "nvme_adminq_poll_period_us": 10000, 00:23:16.596 "nvme_error_stat": false, 00:23:16.596 "nvme_ioq_poll_period_us": 0, 00:23:16.596 "rdma_cm_event_timeout_ms": 0, 00:23:16.596 "rdma_max_cq_size": 0, 00:23:16.596 "rdma_srq_size": 0, 00:23:16.596 "reconnect_delay_sec": 0, 00:23:16.596 "timeout_admin_us": 0, 00:23:16.596 "timeout_us": 0, 00:23:16.596 "transport_ack_timeout": 0, 00:23:16.596 "transport_retry_count": 4, 00:23:16.596 "transport_tos": 0 00:23:16.596 } 00:23:16.596 }, 00:23:16.596 { 00:23:16.596 "method": "bdev_nvme_attach_controller", 00:23:16.596 "params": { 00:23:16.596 "adrfam": "IPv4", 00:23:16.597 "ctrlr_loss_timeout_sec": 0, 00:23:16.597 "ddgst": false, 00:23:16.597 "fast_io_fail_timeout_sec": 0, 00:23:16.597 "hdgst": false, 00:23:16.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.597 "name": "nvme0", 00:23:16.597 "prchk_guard": false, 00:23:16.597 "prchk_reftag": false, 00:23:16.597 "psk": "key0", 00:23:16.597 "reconnect_delay_sec": 0, 00:23:16.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.597 "traddr": "10.0.0.2", 00:23:16.597 "trsvcid": "4420", 00:23:16.597 "trtype": "TCP" 00:23:16.597 } 00:23:16.597 }, 00:23:16.597 { 00:23:16.597 "method": "bdev_nvme_set_hotplug", 00:23:16.597 "params": { 00:23:16.597 "enable": false, 00:23:16.597 "period_us": 100000 00:23:16.597 } 00:23:16.597 }, 00:23:16.597 { 00:23:16.597 "method": "bdev_enable_histogram", 00:23:16.597 "params": { 00:23:16.597 "enable": true, 00:23:16.597 "name": "nvme0n1" 00:23:16.597 } 00:23:16.597 }, 00:23:16.597 { 00:23:16.597 "method": "bdev_wait_for_examine" 00:23:16.597 } 00:23:16.597 ] 00:23:16.597 }, 00:23:16.597 { 00:23:16.597 "subsystem": "nbd", 00:23:16.597 "config": [] 00:23:16.597 } 00:23:16.597 ] 00:23:16.597 }' 00:23:16.597 10:03:53 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 84775 00:23:16.597 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 84775 ']' 00:23:16.597 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 84775 00:23:16.597 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:16.597 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:16.597 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84775 00:23:16.597 killing process with 
pid 84775 00:23:16.597 Received shutdown signal, test time was about 1.000000 seconds 00:23:16.597 00:23:16.597 Latency(us) 00:23:16.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.597 =================================================================================================================== 00:23:16.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:16.597 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:16.597 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:16.597 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84775' 00:23:16.597 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 84775 00:23:16.597 10:03:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 84775 00:23:17.226 10:03:54 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 84725 00:23:17.226 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 84725 ']' 00:23:17.226 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 84725 00:23:17.226 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:17.226 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:17.226 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84725 00:23:17.226 killing process with pid 84725 00:23:17.226 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:17.226 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:17.226 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84725' 00:23:17.226 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 84725 00:23:17.226 [2024-05-15 10:03:54.306973] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:17.226 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 84725 00:23:17.485 10:03:54 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:17.485 10:03:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.485 10:03:54 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:17.485 "subsystems": [ 00:23:17.485 { 00:23:17.485 "subsystem": "keyring", 00:23:17.485 "config": [ 00:23:17.485 { 00:23:17.485 "method": "keyring_file_add_key", 00:23:17.485 "params": { 00:23:17.485 "name": "key0", 00:23:17.485 "path": "/tmp/tmp.IQwxcjDtbv" 00:23:17.485 } 00:23:17.485 } 00:23:17.485 ] 00:23:17.485 }, 00:23:17.485 { 00:23:17.485 "subsystem": "iobuf", 00:23:17.485 "config": [ 00:23:17.485 { 00:23:17.485 "method": "iobuf_set_options", 00:23:17.485 "params": { 00:23:17.485 "large_bufsize": 135168, 00:23:17.485 "large_pool_count": 1024, 00:23:17.485 "small_bufsize": 8192, 00:23:17.485 "small_pool_count": 8192 00:23:17.485 } 00:23:17.485 } 00:23:17.485 ] 00:23:17.485 }, 00:23:17.485 { 00:23:17.485 "subsystem": "sock", 00:23:17.485 "config": [ 00:23:17.485 { 00:23:17.485 "method": "sock_impl_set_options", 00:23:17.485 "params": { 00:23:17.485 "enable_ktls": false, 00:23:17.485 "enable_placement_id": 0, 00:23:17.485 "enable_quickack": false, 00:23:17.485 "enable_recv_pipe": true, 00:23:17.485 "enable_zerocopy_send_client": false, 
00:23:17.485 "enable_zerocopy_send_server": true, 00:23:17.485 "impl_name": "posix", 00:23:17.485 "recv_buf_size": 2097152, 00:23:17.485 "send_buf_size": 2097152, 00:23:17.485 "tls_version": 0, 00:23:17.485 "zerocopy_threshold": 0 00:23:17.485 } 00:23:17.485 }, 00:23:17.485 { 00:23:17.485 "method": "sock_impl_set_options", 00:23:17.485 "params": { 00:23:17.485 "enable_ktls": false, 00:23:17.485 "enable_placement_id": 0, 00:23:17.485 "enable_quickack": false, 00:23:17.485 "enable_recv_pipe": true, 00:23:17.485 "enable_zerocopy_send_client": false, 00:23:17.485 "enable_zerocopy_send_server": true, 00:23:17.485 "impl_name": "ssl", 00:23:17.486 "recv_buf_size": 4096, 00:23:17.486 "send_buf_size": 4096, 00:23:17.486 "tls_version": 0, 00:23:17.486 "zerocopy_threshold": 0 00:23:17.486 } 00:23:17.486 } 00:23:17.486 ] 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "subsystem": "vmd", 00:23:17.486 "config": [] 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "subsystem": "accel", 00:23:17.486 "config": [ 00:23:17.486 { 00:23:17.486 "method": "accel_set_options", 00:23:17.486 "params": { 00:23:17.486 "buf_count": 2048, 00:23:17.486 "large_cache_size": 16, 00:23:17.486 "sequence_count": 2048, 00:23:17.486 "small_cache_size": 128, 00:23:17.486 "task_count": 2048 00:23:17.486 } 00:23:17.486 } 00:23:17.486 ] 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "subsystem": "bdev", 00:23:17.486 "config": [ 00:23:17.486 { 00:23:17.486 "method": "bdev_set_options", 00:23:17.486 "params": { 00:23:17.486 "bdev_auto_examine": true, 00:23:17.486 "bdev_io_cache_size": 256, 00:23:17.486 "bdev_io_pool_size": 65535, 00:23:17.486 "iobuf_large_cache_size": 16, 00:23:17.486 "iobuf_small_cache_size": 128 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "bdev_raid_set_options", 00:23:17.486 "params": { 00:23:17.486 "process_window_size_kb": 1024 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "bdev_iscsi_set_options", 00:23:17.486 "params": { 00:23:17.486 "timeout_sec": 30 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "bdev_nvme_set_options", 00:23:17.486 "params": { 00:23:17.486 "action_on_timeout": "none", 00:23:17.486 "allow_accel_sequence": false, 00:23:17.486 "arbitration_burst": 0, 00:23:17.486 "bdev_retry_count": 3, 00:23:17.486 "ctrlr_loss_timeout_sec": 0, 00:23:17.486 "delay_cmd_submit": true, 00:23:17.486 "dhchap_dhgroups": [ 00:23:17.486 "null", 00:23:17.486 "ffdhe2048", 00:23:17.486 "ffdhe3072", 00:23:17.486 "ffdhe4096", 00:23:17.486 "ffdhe6144", 00:23:17.486 "ffdhe8192" 00:23:17.486 ], 00:23:17.486 "dhchap_digests": [ 00:23:17.486 "sha256", 00:23:17.486 "sha384", 00:23:17.486 "sha512" 00:23:17.486 ], 00:23:17.486 "disable_auto_failback": false, 00:23:17.486 "fast_io_fail_timeout_sec": 0, 00:23:17.486 "generate_uuids": false, 00:23:17.486 "high_priority_weight": 0, 00:23:17.486 "io_path_stat": false, 00:23:17.486 "io_queue_requests": 0, 00:23:17.486 "keep_alive_timeout_ms": 10000, 00:23:17.486 "low_priority_weight": 0, 00:23:17.486 "medium_priority_weight": 0, 00:23:17.486 "nvme_adminq_poll_period_us": 10000, 00:23:17.486 "nvme_error_stat": false, 00:23:17.486 "nvme_ioq_poll_period_us": 0, 00:23:17.486 "rdma_cm_event_timeout_ms": 0, 00:23:17.486 "rdma_max_cq_size": 0, 00:23:17.486 "rdma_srq_size": 0, 00:23:17.486 "reconnect_delay_sec": 0, 00:23:17.486 "timeout_admin_us": 0, 00:23:17.486 "timeout_us": 0, 00:23:17.486 "transport_ack_timeout": 0, 00:23:17.486 "transport_retry_count": 4, 00:23:17.486 "transport_tos": 0 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 
00:23:17.486 "method": "bdev_nvme_set_hotplug", 00:23:17.486 "params": { 00:23:17.486 "enable": false, 00:23:17.486 "period_us": 100000 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "bdev_malloc_create", 00:23:17.486 "params": { 00:23:17.486 "block_size": 4096, 00:23:17.486 "name": "malloc0", 00:23:17.486 "num_blocks": 8192, 00:23:17.486 "optimal_io_boundary": 0, 00:23:17.486 "physical_block_size": 4096, 00:23:17.486 "uuid": "e2472189-a538-4997-8ff5-cda067cf9fbd" 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "bdev_wait_for_examine" 00:23:17.486 } 00:23:17.486 ] 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "subsystem": "nbd", 00:23:17.486 "config": [] 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "subsystem": "scheduler", 00:23:17.486 "config": [ 00:23:17.486 { 00:23:17.486 "method": "framework_set_scheduler", 00:23:17.486 "params": { 00:23:17.486 "name": "static" 00:23:17.486 } 00:23:17.486 } 00:23:17.486 ] 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "subsystem": "nvmf", 00:23:17.486 "config": [ 00:23:17.486 { 00:23:17.486 "method": "nvmf_set_config", 00:23:17.486 "params": { 00:23:17.486 "admin_cmd_passthru": { 00:23:17.486 "identify_ctrlr": false 00:23:17.486 }, 00:23:17.486 "discovery_filter": "match_any" 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "nvmf_set_max_subsystems", 00:23:17.486 "params": { 00:23:17.486 "max_subsystems": 1024 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "nvmf_set_crdt", 00:23:17.486 "params": { 00:23:17.486 "crdt1": 0, 00:23:17.486 "crdt2": 0, 00:23:17.486 "crdt3": 0 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "nvmf_create_transport", 00:23:17.486 "params": { 00:23:17.486 "abort_timeout_sec": 1, 00:23:17.486 "ack_timeout": 0, 00:23:17.486 "buf_cache_size": 4294967295, 00:23:17.486 "c2h_success": false, 00:23:17.486 "data_wr_pool_size": 0, 00:23:17.486 "dif_insert_or_strip": false, 00:23:17.486 "in_capsule_data_size": 4096, 00:23:17.486 "io_unit_size": 131072, 00:23:17.486 "max_aq_depth": 128, 00:23:17.486 "max_io_qpairs_per_ctrlr": 127, 00:23:17.486 "max_io_size": 131072, 00:23:17.486 "max_queue_depth": 128, 00:23:17.486 "num_shared_buffers": 511, 00:23:17.486 "sock_priority": 0, 00:23:17.486 "trtype": "TCP", 00:23:17.486 "zcopy": false 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "nvmf_create_subsystem", 00:23:17.486 "params": { 00:23:17.486 "allow_any_host": false, 00:23:17.486 "ana_reporting": false, 00:23:17.486 "max_cntlid": 65519, 00:23:17.486 "max_namespaces": 32, 00:23:17.486 "min_cntlid": 1, 00:23:17.486 "model_number": "SPDK bdev Controller", 00:23:17.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.486 "serial_number": "00000000000000000000" 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "nvmf_subsystem_add_host", 00:23:17.486 "params": { 00:23:17.486 "host": "nqn.2016-06.io.spdk:host1", 00:23:17.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.486 "psk": "key0" 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "nvmf_subsystem_add_ns", 00:23:17.486 "params": { 00:23:17.486 "namespace": { 00:23:17.486 "bdev_name": "malloc0", 00:23:17.486 "nguid": "E2472189A53849978FF5CDA067CF9FBD", 00:23:17.486 "no_auto_visible": false, 00:23:17.486 "nsid": 1, 00:23:17.486 "uuid": "e2472189-a538-4997-8ff5-cda067cf9fbd" 00:23:17.486 }, 00:23:17.486 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:17.486 } 00:23:17.486 }, 00:23:17.486 { 00:23:17.486 "method": "nvmf_subsystem_add_listener", 
00:23:17.486 "params": { 00:23:17.486 "listen_address": { 00:23:17.486 "adrfam": "IPv4", 00:23:17.486 "traddr": "10.0.0.2", 00:23:17.486 "trsvcid": "4420", 00:23:17.486 "trtype": "TCP" 00:23:17.486 }, 00:23:17.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.486 "secure_channel": true 00:23:17.486 } 00:23:17.486 } 00:23:17.486 ] 00:23:17.486 } 00:23:17.486 ] 00:23:17.486 }' 00:23:17.486 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:17.486 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.486 10:03:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:17.486 10:03:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84867 00:23:17.486 10:03:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84867 00:23:17.486 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 84867 ']' 00:23:17.486 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.486 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:17.486 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.486 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:17.486 10:03:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.486 [2024-05-15 10:03:54.777890] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:23:17.486 [2024-05-15 10:03:54.778468] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.745 [2024-05-15 10:03:54.925676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.746 [2024-05-15 10:03:55.083358] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.746 [2024-05-15 10:03:55.083607] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.746 [2024-05-15 10:03:55.083737] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.746 [2024-05-15 10:03:55.083844] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.746 [2024-05-15 10:03:55.083877] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:17.746 [2024-05-15 10:03:55.084032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.004 [2024-05-15 10:03:55.359759] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.263 [2024-05-15 10:03:55.391664] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:18.263 [2024-05-15 10:03:55.392048] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:18.263 [2024-05-15 10:03:55.392392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.521 10:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:18.521 10:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:18.521 10:03:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.521 10:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:18.521 10:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.781 10:03:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.781 10:03:55 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=84911 00:23:18.781 10:03:55 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 84911 /var/tmp/bdevperf.sock 00:23:18.781 10:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 84911 ']' 00:23:18.781 10:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.781 10:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:18.781 10:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
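On the bdevperf side the same pieces are wired up against its private RPC socket: start bdevperf idle with -z, register the PSK key, attach an NVMe-oF TCP controller using that key, then drive the workload from bdevperf.py. A condensed hand-driven equivalent of the invocations in this run is sketched below, with the PSK path again a placeholder.

  # Start bdevperf idle (-z) on its own RPC socket: queue depth 128, 4k verify workload, 1 second
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &

  # Register the key and attach the TLS-protected controller through that socket
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/psk.txt   # placeholder PSK file
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # Run the configured workload and print per-bdev results
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests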
00:23:18.781 10:03:55 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:18.781 10:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:18.781 10:03:55 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:18.781 "subsystems": [ 00:23:18.781 { 00:23:18.781 "subsystem": "keyring", 00:23:18.781 "config": [ 00:23:18.781 { 00:23:18.781 "method": "keyring_file_add_key", 00:23:18.781 "params": { 00:23:18.781 "name": "key0", 00:23:18.781 "path": "/tmp/tmp.IQwxcjDtbv" 00:23:18.781 } 00:23:18.781 } 00:23:18.781 ] 00:23:18.781 }, 00:23:18.781 { 00:23:18.781 "subsystem": "iobuf", 00:23:18.781 "config": [ 00:23:18.781 { 00:23:18.781 "method": "iobuf_set_options", 00:23:18.781 "params": { 00:23:18.781 "large_bufsize": 135168, 00:23:18.781 "large_pool_count": 1024, 00:23:18.781 "small_bufsize": 8192, 00:23:18.781 "small_pool_count": 8192 00:23:18.781 } 00:23:18.781 } 00:23:18.781 ] 00:23:18.781 }, 00:23:18.781 { 00:23:18.781 "subsystem": "sock", 00:23:18.781 "config": [ 00:23:18.781 { 00:23:18.781 "method": "sock_impl_set_options", 00:23:18.781 "params": { 00:23:18.781 "enable_ktls": false, 00:23:18.781 "enable_placement_id": 0, 00:23:18.781 "enable_quickack": false, 00:23:18.781 "enable_recv_pipe": true, 00:23:18.781 "enable_zerocopy_send_client": false, 00:23:18.781 "enable_zerocopy_send_server": true, 00:23:18.781 "impl_name": "posix", 00:23:18.781 "recv_buf_size": 2097152, 00:23:18.781 "send_buf_size": 2097152, 00:23:18.781 "tls_version": 0, 00:23:18.781 "zerocopy_threshold": 0 00:23:18.781 } 00:23:18.781 }, 00:23:18.781 { 00:23:18.781 "method": "sock_impl_set_options", 00:23:18.781 "params": { 00:23:18.781 "enable_ktls": false, 00:23:18.781 "enable_placement_id": 0, 00:23:18.781 "enable_quickack": false, 00:23:18.781 "enable_recv_pipe": true, 00:23:18.781 "enable_zerocopy_send_client": false, 00:23:18.781 "enable_zerocopy_send_server": true, 00:23:18.781 "impl_name": "ssl", 00:23:18.781 "recv_buf_size": 4096, 00:23:18.781 "send_buf_size": 4096, 00:23:18.781 "tls_version": 0, 00:23:18.781 "zerocopy_threshold": 0 00:23:18.781 } 00:23:18.781 } 00:23:18.781 ] 00:23:18.781 }, 00:23:18.781 { 00:23:18.781 "subsystem": "vmd", 00:23:18.781 "config": [] 00:23:18.781 }, 00:23:18.781 { 00:23:18.781 "subsystem": "accel", 00:23:18.781 "config": [ 00:23:18.781 { 00:23:18.781 "method": "accel_set_options", 00:23:18.781 "params": { 00:23:18.781 "buf_count": 2048, 00:23:18.781 "large_cache_size": 16, 00:23:18.781 "sequence_count": 2048, 00:23:18.781 "small_cache_size": 128, 00:23:18.781 "task_count": 2048 00:23:18.781 } 00:23:18.781 } 00:23:18.781 ] 00:23:18.781 }, 00:23:18.781 { 00:23:18.781 "subsystem": "bdev", 00:23:18.781 "config": [ 00:23:18.781 { 00:23:18.781 "method": "bdev_set_options", 00:23:18.781 "params": { 00:23:18.781 "bdev_auto_examine": true, 00:23:18.781 "bdev_io_cache_size": 256, 00:23:18.781 "bdev_io_pool_size": 65535, 00:23:18.781 "iobuf_large_cache_size": 16, 00:23:18.781 "iobuf_small_cache_size": 128 00:23:18.781 } 00:23:18.781 }, 00:23:18.782 { 00:23:18.782 "method": "bdev_raid_set_options", 00:23:18.782 "params": { 00:23:18.782 "process_window_size_kb": 1024 00:23:18.782 } 00:23:18.782 }, 00:23:18.782 { 00:23:18.782 "method": "bdev_iscsi_set_options", 00:23:18.782 "params": { 00:23:18.782 "timeout_sec": 30 00:23:18.782 } 00:23:18.782 }, 00:23:18.782 { 00:23:18.782 "method": "bdev_nvme_set_options", 00:23:18.782 "params": { 
00:23:18.782 "action_on_timeout": "none", 00:23:18.782 "allow_accel_sequence": false, 00:23:18.782 "arbitration_burst": 0, 00:23:18.782 "bdev_retry_count": 3, 00:23:18.782 "ctrlr_loss_timeout_sec": 0, 00:23:18.782 "delay_cmd_submit": true, 00:23:18.782 "dhchap_dhgroups": [ 00:23:18.782 "null", 00:23:18.782 "ffdhe2048", 00:23:18.782 "ffdhe3072", 00:23:18.782 "ffdhe4096", 00:23:18.782 "ffdhe6144", 00:23:18.782 "ffdhe8192" 00:23:18.782 ], 00:23:18.782 "dhchap_digests": [ 00:23:18.782 "sha256", 00:23:18.782 "sha384", 00:23:18.782 "sha512" 00:23:18.782 ], 00:23:18.782 "disable_auto_failback": false, 00:23:18.782 "fast_io_fail_timeout_sec": 0, 00:23:18.782 "generate_uuids": false, 00:23:18.782 "high_priority_weight": 0, 00:23:18.782 "io_path_stat": false, 00:23:18.782 "io_queue_requests": 512, 00:23:18.782 "keep_alive_timeout_ms": 10000, 00:23:18.782 "low_priority_weight": 0, 00:23:18.782 "medium_priority_weight": 0, 00:23:18.782 "nvme_adminq_poll_period_us": 10000, 00:23:18.782 "nvme_error_stat": false, 00:23:18.782 "nvme_ioq_poll_period_us": 0, 00:23:18.782 "rdma_cm_event_timeout_ms": 0, 00:23:18.782 "rdma_max_cq_size": 0, 00:23:18.782 "rdma_srq_size": 0, 00:23:18.782 "reconnect_delay_sec": 0, 00:23:18.782 "timeout_admin_us": 0, 00:23:18.782 "timeout_us": 0, 00:23:18.782 "transport_ack_timeout": 0, 00:23:18.782 "transport_retry_count": 4, 00:23:18.782 "transport_tos": 0 00:23:18.782 } 00:23:18.782 }, 00:23:18.782 { 00:23:18.782 "method": "bdev_nvme_attach_controller", 00:23:18.782 "params": { 00:23:18.782 "adrfam": "IPv4", 00:23:18.782 "ctrlr_loss_timeout_sec": 0, 00:23:18.782 "ddgst": false, 00:23:18.782 "fast_io_fail_timeout_sec": 0, 00:23:18.782 "hdgst": false, 00:23:18.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.782 "name": "nvme0", 00:23:18.782 "prchk_guard": false, 00:23:18.782 "prchk_reftag": false, 00:23:18.782 "psk": "key0", 00:23:18.782 "reconnect_delay_sec": 0, 00:23:18.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.782 "traddr": "10.0.0.2", 00:23:18.782 "trsvcid": "4420", 00:23:18.782 "trtype": "TCP" 00:23:18.782 } 00:23:18.782 }, 00:23:18.782 { 00:23:18.782 "method": "bdev_nvme_set_hotplug", 00:23:18.782 "params": { 00:23:18.782 "enable": false, 00:23:18.782 "period_us": 100000 00:23:18.782 } 00:23:18.782 }, 00:23:18.782 { 00:23:18.782 "method": "bdev_enable_histogram", 00:23:18.782 "params": { 00:23:18.782 "enable": true, 00:23:18.782 "name": "nvme0n1" 00:23:18.782 } 00:23:18.782 }, 00:23:18.782 { 00:23:18.782 "method": "bdev_wait_for_examine" 00:23:18.782 } 00:23:18.782 ] 00:23:18.782 }, 00:23:18.782 { 00:23:18.782 "subsystem": "nbd", 00:23:18.782 "config": [] 00:23:18.782 } 00:23:18.782 ] 00:23:18.782 }' 00:23:18.782 10:03:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.782 [2024-05-15 10:03:56.010708] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:23:18.782 [2024-05-15 10:03:56.011133] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84911 ] 00:23:18.782 [2024-05-15 10:03:56.159631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.051 [2024-05-15 10:03:56.340698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.332 [2024-05-15 10:03:56.551931] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.899 10:03:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:19.899 10:03:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:19.899 10:03:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:19.899 10:03:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:19.899 10:03:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.900 10:03:57 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:20.158 Running I/O for 1 seconds... 00:23:21.094 00:23:21.094 Latency(us) 00:23:21.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.094 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:21.094 Verification LBA range: start 0x0 length 0x2000 00:23:21.094 nvme0n1 : 1.01 5552.59 21.69 0.00 0.00 22874.20 4805.97 18599.74 00:23:21.094 =================================================================================================================== 00:23:21.094 Total : 5552.59 21.69 0.00 0.00 22874.20 4805.97 18599.74 00:23:21.094 0 00:23:21.094 10:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:21.094 10:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:21.094 10:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:21.094 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # type=--id 00:23:21.094 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # id=0 00:23:21.094 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:23:21.094 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:21.094 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:23:21.094 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:23:21.094 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # for n in $shm_files 00:23:21.094 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:21.094 nvmf_trace.0 00:23:21.353 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # return 0 00:23:21.353 10:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 84911 00:23:21.353 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 84911 ']' 00:23:21.353 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 84911 00:23:21.353 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:21.353 10:03:58 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:21.353 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84911 00:23:21.353 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:21.353 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:21.353 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84911' 00:23:21.353 killing process with pid 84911 00:23:21.353 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 84911 00:23:21.353 Received shutdown signal, test time was about 1.000000 seconds 00:23:21.353 00:23:21.353 Latency(us) 00:23:21.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.353 =================================================================================================================== 00:23:21.353 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.354 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 84911 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.612 rmmod nvme_tcp 00:23:21.612 rmmod nvme_fabrics 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 84867 ']' 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 84867 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 84867 ']' 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 84867 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:21.612 10:03:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 84867 00:23:21.870 10:03:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:21.870 10:03:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:21.870 10:03:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 84867' 00:23:21.870 killing process with pid 84867 00:23:21.870 10:03:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 84867 00:23:21.870 [2024-05-15 10:03:59.017032] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:21.870 10:03:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 84867 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aZUL0oRbjJ /tmp/tmp.BkXrnvaIy9 /tmp/tmp.IQwxcjDtbv 00:23:22.130 00:23:22.130 real 1m32.810s 00:23:22.130 user 2m29.774s 00:23:22.130 sys 0m29.962s 00:23:22.130 ************************************ 00:23:22.130 END TEST nvmf_tls 00:23:22.130 ************************************ 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:22.130 10:03:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.130 10:03:59 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:22.130 10:03:59 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:22.130 10:03:59 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:22.130 10:03:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.405 ************************************ 00:23:22.405 START TEST nvmf_fips 00:23:22.405 ************************************ 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:22.405 * Looking for test storage... 
00:23:22.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.405 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:22.406 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:23:22.664 Error setting digest 00:23:22.664 00826546DD7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:22.664 00826546DD7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:22.664 Cannot find device "nvmf_tgt_br" 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:22.664 Cannot find device "nvmf_tgt_br2" 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:22.664 Cannot find device "nvmf_tgt_br" 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:22.664 Cannot find device "nvmf_tgt_br2" 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:22.664 10:03:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:22.664 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:22.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.664 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:23:22.664 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:22.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.664 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:23:22.664 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:22.664 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:22.664 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:22.664 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:22.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:23:22.923 00:23:22.923 --- 10.0.0.2 ping statistics --- 00:23:22.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.923 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:22.923 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:22.923 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:23:22.923 00:23:22.923 --- 10.0.0.3 ping statistics --- 00:23:22.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.923 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:22.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:22.923 00:23:22.923 --- 10.0.0.1 ping statistics --- 00:23:22.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.923 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85207 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85207 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 85207 ']' 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:22.923 10:04:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:23.181 [2024-05-15 10:04:00.366149] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
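Editor's note: the nvmf_veth_init trace above builds the virtual test network before the target is started: a network namespace for the target, veth pairs bridged back to the initiator side, static 10.0.0.x addresses, an iptables opening for TCP/4420, and ping sanity checks. The following is a condensed sketch of that sequence using the same names and addresses as the log (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity); it is a reconstruction for illustration, not the literal nvmf_veth_init function.

```bash
#!/usr/bin/env bash
# Minimal sketch of the veth/namespace topology built by nvmf_veth_init above.
# Names and addresses mirror the log; IPv6 and the second target interface are omitted.
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: the initiator end stays in the default namespace,
# the target end moves into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

# Static addressing used by the test: 10.0.0.1 = initiator, 10.0.0.2 = target.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Bridge the two halves together and open TCP/4420 for NVMe/TCP traffic.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks, as in the log: initiator -> target and target -> initiator.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```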
00:23:23.181 [2024-05-15 10:04:00.366523] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.181 [2024-05-15 10:04:00.511083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.439 [2024-05-15 10:04:00.666248] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.439 [2024-05-15 10:04:00.666523] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.439 [2024-05-15 10:04:00.666715] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.439 [2024-05-15 10:04:00.666797] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.439 [2024-05-15 10:04:00.666835] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.439 [2024-05-15 10:04:00.666971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.007 10:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:24.007 10:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:23:24.007 10:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.007 10:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:24.007 10:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:24.266 10:04:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.266 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:24.266 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:24.267 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:24.267 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:24.267 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:24.267 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:24.267 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:24.267 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:24.267 [2024-05-15 10:04:01.631795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.267 [2024-05-15 10:04:01.647715] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:24.267 [2024-05-15 10:04:01.648036] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.267 [2024-05-15 10:04:01.648499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.526 [2024-05-15 10:04:01.684377] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:24.526 malloc0 00:23:24.526 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.526 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:24.526 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85259 00:23:24.526 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85259 /var/tmp/bdevperf.sock 00:23:24.526 10:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 85259 ']' 00:23:24.526 10:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.526 10:04:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:24.526 10:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:24.526 10:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.526 10:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:24.526 10:04:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:24.526 [2024-05-15 10:04:01.805693] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:23:24.526 [2024-05-15 10:04:01.806055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85259 ] 00:23:24.785 [2024-05-15 10:04:01.956415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.785 [2024-05-15 10:04:02.145295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.719 10:04:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:25.720 10:04:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:23:25.720 10:04:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:25.976 [2024-05-15 10:04:03.115708] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.976 [2024-05-15 10:04:03.116138] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:25.976 TLSTESTn1 00:23:25.976 10:04:03 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:25.976 Running I/O for 10 seconds... 
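Editor's note: the TLS portion of the fips test above wires a PSK into both ends: the key text is written to key.txt with mode 0600, the target side is configured by setup_nvmf_tgt_conf through rpc.py (the PSK-path deprecation warning comes from that step), and bdevperf then attaches a controller with --psk before perform_tests drives the 10-second verify workload. The sketch below condenses the initiator-side commands exactly as they appear in the trace, and assumes bdevperf was already started with -z -r /var/tmp/bdevperf.sock as shown above.

```bash
# Sketch of the TLS attach sequence from the trace above (paths, key and NQNs as in the log).
SPDK=/home/vagrant/spdk_repo/spdk
KEY=$SPDK/test/nvmf/fips/key.txt
BDEVPERF_SOCK=/var/tmp/bdevperf.sock

echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
chmod 0600 "$KEY"   # restrict permissions on the PSK file, as the test does

# Attach a TLS-enabled NVMe/TCP controller through the already-running bdevperf app.
"$SPDK/scripts/rpc.py" -s "$BDEVPERF_SOCK" bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$KEY"

# Then let bdevperf run the configured verify workload (128 QD, 4 KiB, 10 s in the log).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BDEVPERF_SOCK" perform_tests
```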
00:23:35.991 00:23:35.991 Latency(us) 00:23:35.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.991 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:35.991 Verification LBA range: start 0x0 length 0x2000 00:23:35.991 TLSTESTn1 : 10.01 5293.76 20.68 0.00 0.00 24138.12 4181.82 27462.70 00:23:35.991 =================================================================================================================== 00:23:35.991 Total : 5293.76 20.68 0.00 0.00 24138.12 4181.82 27462.70 00:23:35.991 0 00:23:35.991 10:04:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:35.991 10:04:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:35.991 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # type=--id 00:23:35.991 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # id=0 00:23:35.991 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:23:35.991 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # for n in $shm_files 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:36.250 nvmf_trace.0 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # return 0 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85259 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 85259 ']' 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 85259 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 85259 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 85259' 00:23:36.250 killing process with pid 85259 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 85259 00:23:36.250 Received shutdown signal, test time was about 10.000000 seconds 00:23:36.250 00:23:36.250 Latency(us) 00:23:36.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.250 =================================================================================================================== 00:23:36.250 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:36.250 [2024-05-15 10:04:13.519324] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:36.250 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 85259 00:23:36.509 10:04:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:36.509 10:04:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
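Editor's note: cleanup begins by archiving nvmf_trace.0 out of /dev/shm and then stopping the bdevperf process (pid 85259). The killprocess helper visible in the trace checks that the pid still exists, inspects its command name, refuses to signal a sudo wrapper, and then kills and waits for it. The function below is a simplified re-sketch of that visible pattern, not the real helper from autotest_common.sh.

```bash
# Simplified sketch of the killprocess pattern in the trace above:
# confirm the pid is alive, check its command name (e.g. "reactor_2" for bdevperf),
# refuse to kill a sudo wrapper, then signal the process and wait for it to exit.
killprocess() {
    local pid=$1 name
    kill -0 "$pid" || return 1                  # process already gone
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" = sudo ]; then
        return 1                                # never signal the sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it if it is a child of this shell
}
```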
00:23:36.509 10:04:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:36.768 rmmod nvme_tcp 00:23:36.768 rmmod nvme_fabrics 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85207 ']' 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85207 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 85207 ']' 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 85207 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 85207 00:23:36.768 killing process with pid 85207 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 85207' 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 85207 00:23:36.768 [2024-05-15 10:04:13.993306] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:36.768 [2024-05-15 10:04:13.993364] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:36.768 10:04:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 85207 00:23:37.026 10:04:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:37.026 10:04:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:37.026 10:04:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:37.026 10:04:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.026 10:04:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.026 10:04:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.026 10:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.026 10:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.285 10:04:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:37.285 10:04:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:37.285 ************************************ 00:23:37.285 END TEST nvmf_fips 00:23:37.285 ************************************ 00:23:37.285 00:23:37.285 real 0m14.919s 00:23:37.285 user 0m21.346s 00:23:37.285 sys 0m5.610s 
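Editor's note: with the fips test finished, nvmftestfini unloads the NVMe transport modules, tears down the target namespace, flushes the initiator address, and removes the PSK file before the timing summary is printed. A condensed sketch of that teardown follows; note that the namespace removal itself happens inside _remove_spdk_ns, whose output is redirected away in the log, so the `ip netns delete` line is an assumption about what that helper does rather than a command visible in the trace.

```bash
# Condensed sketch of the teardown shown above (nvmftestfini + fips-specific cleanup).
sync
modprobe -v -r nvme-tcp      # the trace shows "rmmod nvme_tcp" / "rmmod nvme_fabrics"
modprobe -v -r nvme-fabrics

# Namespace removal is hidden inside _remove_spdk_ns in the log; assumed equivalent:
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true

# Flush the leftover initiator address, as in the trace.
ip -4 addr flush nvmf_init_if

# The fips test also deletes the PSK it wrote earlier.
rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
```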
00:23:37.285 10:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:37.285 10:04:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:37.285 10:04:14 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:23:37.285 10:04:14 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:23:37.285 10:04:14 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:23:37.285 10:04:14 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:37.285 10:04:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:37.285 10:04:14 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:23:37.285 10:04:14 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:37.285 10:04:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:37.285 10:04:14 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:23:37.285 10:04:14 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:37.285 10:04:14 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:37.285 10:04:14 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:37.285 10:04:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:37.285 ************************************ 00:23:37.285 START TEST nvmf_multicontroller 00:23:37.285 ************************************ 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:37.285 * Looking for test storage... 00:23:37.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # 
NET_TYPE=virt 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.285 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:37.544 10:04:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.544 10:04:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.544 10:04:14 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.544 10:04:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.544 10:04:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.545 10:04:14 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:37.545 Cannot find device "nvmf_tgt_br" 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:37.545 Cannot find device "nvmf_tgt_br2" 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:37.545 Cannot find device "nvmf_tgt_br" 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:37.545 Cannot find device "nvmf_tgt_br2" 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:37.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:37.545 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.2/24 dev nvmf_tgt_if 00:23:37.545 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:37.804 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:37.804 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:37.804 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:37.804 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:37.804 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:37.804 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:37.804 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:37.804 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:37.804 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:37.804 10:04:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:37.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:23:37.804 00:23:37.804 --- 10.0.0.2 ping statistics --- 00:23:37.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.804 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:37.804 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:37.804 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:23:37.804 00:23:37.804 --- 10.0.0.3 ping statistics --- 00:23:37.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.804 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:37.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:37.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:37.804 00:23:37.804 --- 10.0.0.1 ping statistics --- 00:23:37.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.804 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85622 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85622 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 85622 ']' 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:37.804 10:04:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.804 [2024-05-15 10:04:15.160410] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:23:37.805 [2024-05-15 10:04:15.160750] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.063 [2024-05-15 10:04:15.324237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:38.320 [2024-05-15 10:04:15.498474] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
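Editor's note: the multicontroller test launches its own nvmf_tgt inside the namespace with core mask 0xE (three reactors, matching the three "Reactor started" notices) and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. The sketch below shows that start-and-wait step; the polling loop is an illustrative stand-in for the waitforlisten helper, not its literal implementation, and rpc_get_methods is used here only as a cheap liveness probe.

```bash
# Start the target inside the test namespace with a 3-core mask, as in the log,
# then poll until its RPC socket responds. Illustrative only; the real
# waitforlisten helper in autotest_common.sh does more bookkeeping.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
for _ in $(seq 1 100); do                        # max_retries=100, as in the trace
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening"; exit 1; }
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break                                    # RPC server is up
    fi
    sleep 0.5
done
```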
00:23:38.320 [2024-05-15 10:04:15.498733] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.320 [2024-05-15 10:04:15.498877] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.321 [2024-05-15 10:04:15.498997] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.321 [2024-05-15 10:04:15.499054] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.321 [2024-05-15 10:04:15.499262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.321 [2024-05-15 10:04:15.499734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:38.321 [2024-05-15 10:04:15.499738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.887 [2024-05-15 10:04:16.220525] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:38.887 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.147 Malloc0 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.147 [2024-05-15 10:04:16.299709] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:39.147 [2024-05-15 10:04:16.300334] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.147 [2024-05-15 10:04:16.307886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.147 Malloc1 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.147 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:39.148 10:04:16 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85674 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85674 /var/tmp/bdevperf.sock 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 85674 ']' 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:39.148 10:04:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.564 NVMe0n1 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.564 1 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.564 2024/05/15 10:04:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:40.564 request: 00:23:40.564 { 00:23:40.564 "method": "bdev_nvme_attach_controller", 00:23:40.564 "params": { 00:23:40.564 "name": "NVMe0", 00:23:40.564 "trtype": "tcp", 00:23:40.564 "traddr": "10.0.0.2", 00:23:40.564 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:40.564 "hostaddr": "10.0.0.2", 00:23:40.564 "hostsvcid": "60000", 00:23:40.564 "adrfam": "ipv4", 00:23:40.564 "trsvcid": "4420", 00:23:40.564 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:23:40.564 } 00:23:40.564 } 00:23:40.564 Got JSON-RPC error response 00:23:40.564 GoRPCClient: error on JSON-RPC call 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:40.564 
10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.564 2024/05/15 10:04:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:40.564 request: 00:23:40.564 { 00:23:40.564 "method": "bdev_nvme_attach_controller", 00:23:40.564 "params": { 00:23:40.564 "name": "NVMe0", 00:23:40.564 "trtype": "tcp", 00:23:40.564 "traddr": "10.0.0.2", 00:23:40.564 "hostaddr": "10.0.0.2", 00:23:40.564 "hostsvcid": "60000", 00:23:40.564 "adrfam": "ipv4", 00:23:40.564 "trsvcid": "4420", 00:23:40.564 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:23:40.564 } 00:23:40.564 } 00:23:40.564 Got JSON-RPC error response 00:23:40.564 GoRPCClient: error on JSON-RPC call 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.564 2024/05/15 10:04:17 error on 
JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:23:40.564 request: 00:23:40.564 { 00:23:40.564 "method": "bdev_nvme_attach_controller", 00:23:40.564 "params": { 00:23:40.564 "name": "NVMe0", 00:23:40.564 "trtype": "tcp", 00:23:40.564 "traddr": "10.0.0.2", 00:23:40.564 "hostaddr": "10.0.0.2", 00:23:40.564 "hostsvcid": "60000", 00:23:40.564 "adrfam": "ipv4", 00:23:40.564 "trsvcid": "4420", 00:23:40.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.564 "multipath": "disable" 00:23:40.564 } 00:23:40.564 } 00:23:40.564 Got JSON-RPC error response 00:23:40.564 GoRPCClient: error on JSON-RPC call 00:23:40.564 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.565 2024/05/15 10:04:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:40.565 request: 00:23:40.565 { 00:23:40.565 "method": "bdev_nvme_attach_controller", 00:23:40.565 "params": { 00:23:40.565 "name": "NVMe0", 00:23:40.565 "trtype": "tcp", 
00:23:40.565 "traddr": "10.0.0.2", 00:23:40.565 "hostaddr": "10.0.0.2", 00:23:40.565 "hostsvcid": "60000", 00:23:40.565 "adrfam": "ipv4", 00:23:40.565 "trsvcid": "4420", 00:23:40.565 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.565 "multipath": "failover" 00:23:40.565 } 00:23:40.565 } 00:23:40.565 Got JSON-RPC error response 00:23:40.565 GoRPCClient: error on JSON-RPC call 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.565 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.565 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:40.565 10:04:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:41.939 0 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:41.939 10:04:19 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 85674 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 85674 ']' 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 85674 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 85674 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 85674' 00:23:41.939 killing process with pid 85674 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 85674 00:23:41.939 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 85674 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # sort -u 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # cat 00:23:42.197 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:42.197 [2024-05-15 10:04:16.448821] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:23:42.197 [2024-05-15 10:04:16.448978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85674 ] 00:23:42.197 [2024-05-15 10:04:16.588136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.197 [2024-05-15 10:04:16.774252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.197 [2024-05-15 10:04:17.849470] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 6923a847-874c-4b6a-bb6d-ef02ccbb0e50 already exists 00:23:42.197 [2024-05-15 10:04:17.849582] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:6923a847-874c-4b6a-bb6d-ef02ccbb0e50 alias for bdev NVMe1n1 00:23:42.197 [2024-05-15 10:04:17.849601] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:42.197 Running I/O for 1 seconds... 00:23:42.197 00:23:42.197 Latency(us) 00:23:42.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.197 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:42.197 NVMe0n1 : 1.01 21529.69 84.10 0.00 0.00 5935.82 5242.88 13918.60 00:23:42.197 =================================================================================================================== 00:23:42.197 Total : 21529.69 84.10 0.00 0.00 5935.82 5242.88 13918.60 00:23:42.197 Received shutdown signal, test time was about 1.000000 seconds 00:23:42.197 00:23:42.197 Latency(us) 00:23:42.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.197 =================================================================================================================== 00:23:42.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:42.197 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1615 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:42.197 rmmod nvme_tcp 00:23:42.197 rmmod nvme_fabrics 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85622 ']' 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85622 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 85622 ']' 00:23:42.197 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 85622 
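For reference, the bdevperf-side sequence captured in the try.txt dump above reduces to the sketch below (an assumption: `rpc_cmd -s /var/tmp/bdevperf.sock` in the trace forwards its arguments to SPDK's scripts/rpc.py, as the autotest wrapper normally does; the names, NQNs and addresses are the ones used by this run):

    # first attach over the 4420 listener succeeds and exposes bdev NVMe0n1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # while NVMe0 exists, each re-attach attempt fails with Code=-114, matching the
    # JSON-RPC errors shown earlier:
    #   same name, different hostnqn   (-q nqn.2021-09-7.io.spdk:00001)
    #   same name, different subsystem (-n nqn.2016-06.io.spdk:cnode2)
    #   same name with -x disable, and with -x failover on the same network path
    # A second path on port 4421 is then attached and detached, NVMe1 is attached as an
    # independent controller, and bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    # drives the 1-second write workload whose results appear in the dump above.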
00:23:42.455 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:23:42.455 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:42.455 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 85622 00:23:42.455 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:42.455 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:42.455 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 85622' 00:23:42.455 killing process with pid 85622 00:23:42.455 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 85622 00:23:42.455 [2024-05-15 10:04:19.606258] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 10:04:19 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 85622 00:23:42.455 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:42.713 10:04:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:42.713 10:04:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:42.713 10:04:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:42.713 10:04:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:42.713 10:04:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:42.713 10:04:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.713 10:04:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.713 10:04:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.971 10:04:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:42.971 00:23:42.971 real 0m5.557s 00:23:42.971 user 0m16.750s 00:23:42.971 sys 0m1.443s 00:23:42.971 10:04:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:42.971 10:04:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:42.971 ************************************ 00:23:42.971 END TEST nvmf_multicontroller 00:23:42.971 ************************************ 00:23:42.971 10:04:20 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:42.971 10:04:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:42.971 10:04:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:42.971 10:04:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:42.971 ************************************ 00:23:42.971 START TEST nvmf_aer 00:23:42.971 ************************************ 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:42.971 * Looking for test storage... 
00:23:42.971 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.971 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.972 
10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:42.972 Cannot find device "nvmf_tgt_br" 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:23:42.972 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:43.229 Cannot find device "nvmf_tgt_br2" 00:23:43.229 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:23:43.229 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:43.229 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:43.229 Cannot find device "nvmf_tgt_br" 00:23:43.229 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:23:43.229 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:43.229 Cannot find device "nvmf_tgt_br2" 00:23:43.229 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:23:43.229 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:43.229 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:43.229 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:43.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:43.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:43.230 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:43.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:23:43.488 00:23:43.488 --- 10.0.0.2 ping statistics --- 00:23:43.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.488 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:43.488 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:43.488 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:23:43.488 00:23:43.488 --- 10.0.0.3 ping statistics --- 00:23:43.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.488 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:43.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:43.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:23:43.488 00:23:43.488 --- 10.0.0.1 ping statistics --- 00:23:43.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.488 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=85925 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 85925 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # '[' -z 85925 ']' 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:43.488 10:04:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:43.488 [2024-05-15 10:04:20.807693] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:23:43.488 [2024-05-15 10:04:20.808073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.746 [2024-05-15 10:04:20.972789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.006 [2024-05-15 10:04:21.166636] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.006 [2024-05-15 10:04:21.166941] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
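Condensed, the nvmf_veth_init sequence traced above amounts to the following sketch (the same ip/iptables commands that appear in the trace, minus the teardown of leftovers; interface and namespace names are the ones nvmf/common.sh uses):

    # nvmf_init_if (10.0.0.1/24, host)  <-veth-> nvmf_init_br --+
    # nvmf_tgt_if  (10.0.0.2/24, netns) <-veth-> nvmf_tgt_br  --+-- nvmf_br bridge
    # nvmf_tgt_if2 (10.0.0.3/24, netns) <-veth-> nvmf_tgt_br2 --+
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target namespace, verified by the ping output above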
00:23:44.006 [2024-05-15 10:04:21.167107] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.006 [2024-05-15 10:04:21.167250] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.006 [2024-05-15 10:04:21.167296] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.006 [2024-05-15 10:04:21.167546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.006 [2024-05-15 10:04:21.167611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.006 [2024-05-15 10:04:21.168371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.006 [2024-05-15 10:04:21.168377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@861 -- # return 0 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.574 [2024-05-15 10:04:21.821060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.574 Malloc0 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.574 [2024-05-15 10:04:21.902750] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:44.574 [2024-05-15 10:04:21.903072] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:44.574 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:44.574 [ 00:23:44.574 { 00:23:44.574 "allow_any_host": true, 00:23:44.574 "hosts": [], 00:23:44.574 "listen_addresses": [], 00:23:44.574 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:44.574 "subtype": "Discovery" 00:23:44.574 }, 00:23:44.574 { 00:23:44.574 "allow_any_host": true, 00:23:44.574 "hosts": [], 00:23:44.574 "listen_addresses": [ 00:23:44.574 { 00:23:44.574 "adrfam": "IPv4", 00:23:44.574 "traddr": "10.0.0.2", 00:23:44.574 "trsvcid": "4420", 00:23:44.574 "trtype": "TCP" 00:23:44.574 } 00:23:44.574 ], 00:23:44.574 "max_cntlid": 65519, 00:23:44.574 "max_namespaces": 2, 00:23:44.574 "min_cntlid": 1, 00:23:44.574 "model_number": "SPDK bdev Controller", 00:23:44.574 "namespaces": [ 00:23:44.574 { 00:23:44.574 "bdev_name": "Malloc0", 00:23:44.574 "name": "Malloc0", 00:23:44.574 "nguid": "48580D19B3EA4F4CA47FCD67D9AEA217", 00:23:44.574 "nsid": 1, 00:23:44.574 "uuid": "48580d19-b3ea-4f4c-a47f-cd67d9aea217" 00:23:44.574 } 00:23:44.574 ], 00:23:44.574 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.574 "serial_number": "SPDK00000000000001", 00:23:44.574 "subtype": "NVMe" 00:23:44.574 } 00:23:44.574 ] 00:23:44.575 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:44.575 10:04:21 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:44.575 10:04:21 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:44.575 10:04:21 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=85985 00:23:44.575 10:04:21 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:44.575 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # local i=0 00:23:44.575 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:44.575 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 0 -lt 200 ']' 00:23:44.575 10:04:21 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:44.575 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=1 00:23:44.575 10:04:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:23:44.833 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:44.833 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 1 -lt 200 ']' 00:23:44.833 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=2 00:23:44.833 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:23:44.833 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:44.833 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 2 -lt 200 ']' 00:23:44.833 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=3 00:23:44.833 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1273 -- # return 0 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.091 Malloc1 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.091 [ 00:23:45.091 { 00:23:45.091 "allow_any_host": true, 00:23:45.091 "hosts": [], 00:23:45.091 "listen_addresses": [], 00:23:45.091 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:45.091 "subtype": "Discovery" 00:23:45.091 }, 00:23:45.091 { 00:23:45.091 "allow_any_host": true, 00:23:45.091 "hosts": [], 00:23:45.091 "listen_addresses": [ 00:23:45.091 { 00:23:45.091 "adrfam": "IPv4", 00:23:45.091 "traddr": "10.0.0.2", 00:23:45.091 "trsvcid": "4420", 00:23:45.091 "trtype": "TCP" 00:23:45.091 } 00:23:45.091 ], 00:23:45.091 "max_cntlid": 65519, 00:23:45.091 "max_namespaces": 2, 00:23:45.091 "min_cntlid": 1, 00:23:45.091 "model_number": "SPDK bdev Controller", 00:23:45.091 "namespaces": [ 00:23:45.091 { 00:23:45.091 "bdev_name": "Malloc0", 00:23:45.091 "name": "Malloc0", 00:23:45.091 "nguid": "48580D19B3EA4F4CA47FCD67D9AEA217", 00:23:45.091 "nsid": 1, 00:23:45.091 "uuid": "48580d19-b3ea-4f4c-a47f-cd67d9aea217" 00:23:45.091 }, 00:23:45.091 { 00:23:45.091 "bdev_name": "Malloc1", 00:23:45.091 "name": "Malloc1", 00:23:45.091 "nguid": "07546534AA1B4D01A735D2DBC21BFC8B", 00:23:45.091 "nsid": 2, 00:23:45.091 "uuid": "07546534-aa1b-4d01-a735-d2dbc21bfc8b" 00:23:45.091 } 00:23:45.091 ], 00:23:45.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.091 "serial_number": "SPDK00000000000001", 00:23:45.091 "subtype": "NVMe" 00:23:45.091 } 00:23:45.091 ] 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 85985 00:23:45.091 Asynchronous Event Request test 00:23:45.091 Attaching to 10.0.0.2 00:23:45.091 Attached to 10.0.0.2 00:23:45.091 Registering asynchronous event callbacks... 00:23:45.091 Starting namespace attribute notice tests for all controllers... 
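The namespace-change notice logged next is the expected effect of the nvmf_subsystem_add_ns call above. As a minimal sketch (assuming, as before, that rpc_cmd forwards to scripts/rpc.py, here against the target's default RPC socket), the trigger is just:

    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    # adding nsid 2 to cnode1 raises an asynchronous event on every connected
    # controller; the aer test program reads log page 4 (Changed Namespace List)
    # and reports it as "aer_cb - Changed Namespace" in the output that follows.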
00:23:45.091 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:45.091 aer_cb - Changed Namespace 00:23:45.091 Cleaning up... 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.091 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.350 rmmod nvme_tcp 00:23:45.350 rmmod nvme_fabrics 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 85925 ']' 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 85925 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' -z 85925 ']' 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # kill -0 85925 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # uname 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 85925 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:45.350 killing process with pid 85925 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # echo 'killing process with pid 85925' 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # kill 85925 00:23:45.350 [2024-05-15 10:04:22.557713] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 
hit 1 times 00:23:45.350 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@971 -- # wait 85925 00:23:45.608 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.608 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.608 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.608 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.608 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.608 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.608 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.608 10:04:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.608 10:04:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:45.868 00:23:45.868 real 0m2.858s 00:23:45.868 user 0m6.867s 00:23:45.868 sys 0m0.912s 00:23:45.868 10:04:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:45.868 ************************************ 00:23:45.868 END TEST nvmf_aer 00:23:45.868 10:04:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:45.868 ************************************ 00:23:45.868 10:04:23 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:45.868 10:04:23 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:45.868 10:04:23 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:45.868 10:04:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:45.868 ************************************ 00:23:45.868 START TEST nvmf_async_init 00:23:45.868 ************************************ 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:45.868 * Looking for test storage... 
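The nvmf_aer teardown traced above is driven entirely through the target's JSON-RPC interface before the module unload and process kill. Condensed into a standalone sketch (using the same rpc_cmd wrapper the xtrace shows; the error handling and xtrace plumbing of host/aer.sh are omitted, so this is an approximation of the sequence, not the literal script):

    # teardown sequence reconstructed from the trace above
    rpc_cmd bdev_malloc_delete Malloc0                           # drop the two malloc-backed namespaces
    rpc_cmd bdev_malloc_delete Malloc1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1     # remove the subsystem under test
    trap - SIGINT SIGTERM EXIT
    nvmftestfini                                                 # rmmod nvme-tcp/nvme-fabrics, kill the target pid, flush nvmf_init_if

The nvmf_async_init suite that starts below then rebuilds the whole environment from scratch: it re-sources nvmf/common.sh, recreates the veth/netns topology, and starts a fresh nvmf_tgt instance.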
00:23:45.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=668bb0ce5d394fcab0e1fd81c935ff2f 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:45.868 10:04:23 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:45.868 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:46.126 Cannot find device "nvmf_tgt_br" 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:46.126 Cannot find device "nvmf_tgt_br2" 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:46.126 Cannot find device "nvmf_tgt_br" 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:23:46.126 Cannot find device "nvmf_tgt_br2" 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:46.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:46.126 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:46.126 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:46.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:46.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:23:46.385 00:23:46.385 --- 10.0.0.2 ping statistics --- 00:23:46.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.385 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:46.385 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:46.385 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:23:46.385 00:23:46.385 --- 10.0.0.3 ping statistics --- 00:23:46.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.385 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:46.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:46.385 00:23:46.385 --- 10.0.0.1 ping statistics --- 00:23:46.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.385 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86153 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86153 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # '[' -z 86153 ']' 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:46.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:46.385 10:04:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:46.385 [2024-05-15 10:04:23.733521] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:23:46.385 [2024-05-15 10:04:23.734270] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:46.644 [2024-05-15 10:04:23.886629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.903 [2024-05-15 10:04:24.059302] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.903 [2024-05-15 10:04:24.059373] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.903 [2024-05-15 10:04:24.059389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.903 [2024-05-15 10:04:24.059402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.903 [2024-05-15 10:04:24.059414] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.903 [2024-05-15 10:04:24.059452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@861 -- # return 0 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.468 [2024-05-15 10:04:24.807927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.468 null0 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:47.468 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 668bb0ce5d394fcab0e1fd81c935ff2f 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.469 [2024-05-15 10:04:24.847840] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:47.469 [2024-05-15 10:04:24.848161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:47.469 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.727 10:04:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:47.727 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.727 10:04:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.727 nvme0n1 00:23:47.727 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.727 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:47.727 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.727 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.727 [ 00:23:47.727 { 00:23:47.727 "aliases": [ 00:23:47.727 "668bb0ce-5d39-4fca-b0e1-fd81c935ff2f" 00:23:47.727 ], 00:23:47.727 "assigned_rate_limits": { 00:23:47.727 "r_mbytes_per_sec": 0, 00:23:47.727 "rw_ios_per_sec": 0, 00:23:47.727 "rw_mbytes_per_sec": 0, 00:23:47.727 "w_mbytes_per_sec": 0 00:23:47.727 }, 00:23:47.727 "block_size": 512, 00:23:47.727 "claimed": false, 00:23:47.727 "driver_specific": { 00:23:47.727 "mp_policy": "active_passive", 00:23:47.727 "nvme": [ 00:23:47.727 { 00:23:47.727 "ctrlr_data": { 00:23:47.727 "ana_reporting": false, 00:23:47.727 "cntlid": 1, 00:23:47.727 "firmware_revision": "24.05", 00:23:47.727 "model_number": "SPDK bdev Controller", 00:23:47.727 "multi_ctrlr": true, 00:23:47.727 "oacs": { 00:23:47.727 "firmware": 0, 00:23:47.727 "format": 0, 00:23:47.727 "ns_manage": 0, 00:23:47.727 "security": 0 00:23:47.727 }, 00:23:47.727 "serial_number": "00000000000000000000", 00:23:47.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.727 "vendor_id": "0x8086" 00:23:47.727 }, 00:23:47.727 
"ns_data": { 00:23:47.727 "can_share": true, 00:23:47.727 "id": 1 00:23:47.727 }, 00:23:47.727 "trid": { 00:23:47.727 "adrfam": "IPv4", 00:23:47.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.727 "traddr": "10.0.0.2", 00:23:47.727 "trsvcid": "4420", 00:23:47.727 "trtype": "TCP" 00:23:47.727 }, 00:23:47.727 "vs": { 00:23:47.727 "nvme_version": "1.3" 00:23:47.727 } 00:23:47.727 } 00:23:47.727 ] 00:23:47.727 }, 00:23:47.727 "memory_domains": [ 00:23:47.727 { 00:23:47.727 "dma_device_id": "system", 00:23:47.727 "dma_device_type": 1 00:23:47.727 } 00:23:47.727 ], 00:23:47.727 "name": "nvme0n1", 00:23:47.727 "num_blocks": 2097152, 00:23:47.727 "product_name": "NVMe disk", 00:23:47.727 "supported_io_types": { 00:23:47.727 "abort": true, 00:23:47.727 "compare": true, 00:23:47.727 "compare_and_write": true, 00:23:47.727 "flush": true, 00:23:47.727 "nvme_admin": true, 00:23:47.727 "nvme_io": true, 00:23:47.727 "read": true, 00:23:47.727 "reset": true, 00:23:47.727 "unmap": false, 00:23:47.727 "write": true, 00:23:47.727 "write_zeroes": true 00:23:47.727 }, 00:23:47.727 "uuid": "668bb0ce-5d39-4fca-b0e1-fd81c935ff2f", 00:23:47.727 "zoned": false 00:23:47.727 } 00:23:47.727 ] 00:23:47.727 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.727 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:47.727 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.727 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.727 [2024-05-15 10:04:25.077111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:47.727 [2024-05-15 10:04:25.077228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bcf70 (9): Bad file descriptor 00:23:47.987 [2024-05-15 10:04:25.169311] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:47.987 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.987 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:47.987 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.987 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.987 [ 00:23:47.987 { 00:23:47.987 "aliases": [ 00:23:47.987 "668bb0ce-5d39-4fca-b0e1-fd81c935ff2f" 00:23:47.987 ], 00:23:47.987 "assigned_rate_limits": { 00:23:47.987 "r_mbytes_per_sec": 0, 00:23:47.987 "rw_ios_per_sec": 0, 00:23:47.987 "rw_mbytes_per_sec": 0, 00:23:47.987 "w_mbytes_per_sec": 0 00:23:47.987 }, 00:23:47.987 "block_size": 512, 00:23:47.987 "claimed": false, 00:23:47.987 "driver_specific": { 00:23:47.987 "mp_policy": "active_passive", 00:23:47.987 "nvme": [ 00:23:47.987 { 00:23:47.987 "ctrlr_data": { 00:23:47.987 "ana_reporting": false, 00:23:47.987 "cntlid": 2, 00:23:47.988 "firmware_revision": "24.05", 00:23:47.988 "model_number": "SPDK bdev Controller", 00:23:47.988 "multi_ctrlr": true, 00:23:47.988 "oacs": { 00:23:47.988 "firmware": 0, 00:23:47.988 "format": 0, 00:23:47.988 "ns_manage": 0, 00:23:47.988 "security": 0 00:23:47.988 }, 00:23:47.988 "serial_number": "00000000000000000000", 00:23:47.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.988 "vendor_id": "0x8086" 00:23:47.988 }, 00:23:47.988 "ns_data": { 00:23:47.988 "can_share": true, 00:23:47.988 "id": 1 00:23:47.988 }, 00:23:47.988 "trid": { 00:23:47.988 "adrfam": "IPv4", 00:23:47.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.988 "traddr": "10.0.0.2", 00:23:47.988 "trsvcid": "4420", 00:23:47.988 "trtype": "TCP" 00:23:47.988 }, 00:23:47.988 "vs": { 00:23:47.988 "nvme_version": "1.3" 00:23:47.988 } 00:23:47.988 } 00:23:47.988 ] 00:23:47.988 }, 00:23:47.988 "memory_domains": [ 00:23:47.988 { 00:23:47.988 "dma_device_id": "system", 00:23:47.988 "dma_device_type": 1 00:23:47.988 } 00:23:47.988 ], 00:23:47.988 "name": "nvme0n1", 00:23:47.988 "num_blocks": 2097152, 00:23:47.988 "product_name": "NVMe disk", 00:23:47.988 "supported_io_types": { 00:23:47.988 "abort": true, 00:23:47.988 "compare": true, 00:23:47.988 "compare_and_write": true, 00:23:47.988 "flush": true, 00:23:47.988 "nvme_admin": true, 00:23:47.988 "nvme_io": true, 00:23:47.988 "read": true, 00:23:47.988 "reset": true, 00:23:47.988 "unmap": false, 00:23:47.988 "write": true, 00:23:47.988 "write_zeroes": true 00:23:47.988 }, 00:23:47.988 "uuid": "668bb0ce-5d39-4fca-b0e1-fd81c935ff2f", 00:23:47.988 "zoned": false 00:23:47.988 } 00:23:47.988 ] 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zWVmhuA7uE 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # 
chmod 0600 /tmp/tmp.zWVmhuA7uE 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.988 [2024-05-15 10:04:25.249252] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.988 [2024-05-15 10:04:25.249475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zWVmhuA7uE 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.988 [2024-05-15 10:04:25.261257] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.zWVmhuA7uE 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.988 [2024-05-15 10:04:25.273235] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:47.988 [2024-05-15 10:04:25.273326] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:47.988 nvme0n1 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:47.988 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.988 [ 00:23:47.988 { 00:23:47.988 "aliases": [ 00:23:47.988 "668bb0ce-5d39-4fca-b0e1-fd81c935ff2f" 00:23:47.988 ], 00:23:47.988 "assigned_rate_limits": { 00:23:47.988 "r_mbytes_per_sec": 0, 00:23:47.988 "rw_ios_per_sec": 0, 00:23:47.988 "rw_mbytes_per_sec": 0, 00:23:47.988 "w_mbytes_per_sec": 0 00:23:47.988 }, 00:23:47.988 "block_size": 512, 00:23:47.988 "claimed": false, 00:23:47.988 "driver_specific": { 00:23:47.988 "mp_policy": "active_passive", 00:23:47.988 "nvme": [ 00:23:47.988 { 00:23:47.988 "ctrlr_data": 
{ 00:23:47.988 "ana_reporting": false, 00:23:47.988 "cntlid": 3, 00:23:47.988 "firmware_revision": "24.05", 00:23:47.988 "model_number": "SPDK bdev Controller", 00:23:47.988 "multi_ctrlr": true, 00:23:47.988 "oacs": { 00:23:47.988 "firmware": 0, 00:23:47.988 "format": 0, 00:23:47.988 "ns_manage": 0, 00:23:47.988 "security": 0 00:23:47.988 }, 00:23:47.988 "serial_number": "00000000000000000000", 00:23:47.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.988 "vendor_id": "0x8086" 00:23:47.988 }, 00:23:47.988 "ns_data": { 00:23:47.988 "can_share": true, 00:23:47.988 "id": 1 00:23:47.988 }, 00:23:47.988 "trid": { 00:23:47.988 "adrfam": "IPv4", 00:23:47.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:47.988 "traddr": "10.0.0.2", 00:23:47.988 "trsvcid": "4421", 00:23:47.988 "trtype": "TCP" 00:23:47.988 }, 00:23:47.988 "vs": { 00:23:47.988 "nvme_version": "1.3" 00:23:47.988 } 00:23:48.248 } 00:23:48.248 ] 00:23:48.248 }, 00:23:48.248 "memory_domains": [ 00:23:48.248 { 00:23:48.248 "dma_device_id": "system", 00:23:48.248 "dma_device_type": 1 00:23:48.248 } 00:23:48.248 ], 00:23:48.248 "name": "nvme0n1", 00:23:48.248 "num_blocks": 2097152, 00:23:48.248 "product_name": "NVMe disk", 00:23:48.248 "supported_io_types": { 00:23:48.248 "abort": true, 00:23:48.248 "compare": true, 00:23:48.248 "compare_and_write": true, 00:23:48.248 "flush": true, 00:23:48.248 "nvme_admin": true, 00:23:48.248 "nvme_io": true, 00:23:48.248 "read": true, 00:23:48.248 "reset": true, 00:23:48.248 "unmap": false, 00:23:48.248 "write": true, 00:23:48.248 "write_zeroes": true 00:23:48.248 }, 00:23:48.248 "uuid": "668bb0ce-5d39-4fca-b0e1-fd81c935ff2f", 00:23:48.248 "zoned": false 00:23:48.248 } 00:23:48.248 ] 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.zWVmhuA7uE 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.248 rmmod nvme_tcp 00:23:48.248 rmmod nvme_fabrics 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86153 ']' 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86153 
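The secure-channel pass above repeats the attach against port 4421, this time gated on a TLS pre-shared key and an explicit host allow-list. Roughly, based on the commands visible in the trace (the redirection of the echoed key into the temp file is not visible in the xtrace and is implied here; the PSK-path mechanism itself is flagged in the log as deprecated for v24.09):

    key_path=$(mktemp)                                           # /tmp/tmp.zWVmhuA7uE in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
            -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc_cmd bdev_get_bdevs -b nvme0n1                            # same NGUID, now via the TLS listener (cntlid 3, trsvcid 4421)
    rpc_cmd bdev_nvme_detach_controller nvme0
    rm -f "$key_path"

Both sides log TLS support as experimental, and the three deprecation warnings collected at shutdown ([listen_]address.transport, spdk_nvme_ctrlr_opts.psk, PSK path) come from this part of the test.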
00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' -z 86153 ']' 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # kill -0 86153 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # uname 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 86153 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:48.248 killing process with pid 86153 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 86153' 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # kill 86153 00:23:48.248 [2024-05-15 10:04:25.527024] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:48.248 [2024-05-15 10:04:25.527065] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:48.248 [2024-05-15 10:04:25.527077] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:48.248 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@971 -- # wait 86153 00:23:48.506 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.506 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.506 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.506 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.506 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.506 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.506 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.506 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.764 10:04:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:48.764 00:23:48.764 real 0m2.825s 00:23:48.764 user 0m2.485s 00:23:48.764 sys 0m0.855s 00:23:48.764 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:48.764 10:04:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:48.764 ************************************ 00:23:48.764 END TEST nvmf_async_init 00:23:48.764 ************************************ 00:23:48.764 10:04:25 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:48.764 10:04:25 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:48.764 10:04:25 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:48.764 10:04:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:48.764 ************************************ 00:23:48.764 START TEST dma 00:23:48.764 ************************************ 00:23:48.764 
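The dma suite that runs next is RDMA-only: for the tcp transport it sources nvmf/common.sh and bails out immediately, which is why its timing summary reports only a fraction of a second. The guard, as it appears expanded in the xtrace that follows (the trace shows only the expanded values, not the variable name used in host/dma.sh), amounts to:

    # transport guard from host/dma.sh, shown here with the expanded value from this run
    [ tcp != rdma ] && exit 0                                    # dma offload paths are exercised only over rdma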
10:04:25 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:48.764 * Looking for test storage... 00:23:48.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:48.764 10:04:26 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.764 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:48.764 10:04:26 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.764 10:04:26 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.764 10:04:26 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.764 10:04:26 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.764 10:04:26 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.764 10:04:26 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.764 10:04:26 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:48.764 10:04:26 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.765 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:48.765 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:48.765 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:48.765 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.765 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.765 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.765 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:48.765 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:48.765 10:04:26 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:48.765 10:04:26 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:48.765 10:04:26 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:48.765 00:23:48.765 real 0m0.132s 00:23:48.765 user 0m0.056s 00:23:48.765 sys 0m0.079s 00:23:48.765 10:04:26 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:48.765 10:04:26 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:48.765 ************************************ 00:23:48.765 END TEST dma 00:23:48.765 ************************************ 00:23:49.024 10:04:26 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:49.024 10:04:26 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:49.024 10:04:26 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:49.024 10:04:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.024 ************************************ 00:23:49.024 START TEST nvmf_identify 00:23:49.024 ************************************ 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:49.024 * Looking for test storage... 
00:23:49.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:49.024 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:49.025 Cannot find device "nvmf_tgt_br" 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:49.025 Cannot find device "nvmf_tgt_br2" 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:49.025 Cannot find device "nvmf_tgt_br" 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:49.025 Cannot find device "nvmf_tgt_br2" 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:23:49.025 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:49.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:49.284 10:04:26 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:49.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:49.284 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:49.623 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:49.623 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:49.623 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:49.623 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:49.623 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:49.623 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:49.623 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:49.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:49.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:23:49.623 00:23:49.623 --- 10.0.0.2 ping statistics --- 00:23:49.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.623 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:23:49.623 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:49.623 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:49.623 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:23:49.623 00:23:49.623 --- 10.0.0.3 ping statistics --- 00:23:49.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.623 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:49.623 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:49.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:49.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:23:49.624 00:23:49.624 --- 10.0.0.1 ping statistics --- 00:23:49.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.624 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86427 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86427 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 86427 ']' 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
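
The xtrace above is nvmf_veth_init (nvmf/common.sh) first tearing down any stale interfaces (the "Cannot find device" / "Cannot open network namespace" errors are expected on a clean host) and then building the test topology: a nvmf_tgt_ns_spdk network namespace, three veth pairs, a nvmf_br bridge joining the host-side ends, iptables rules admitting NVMe/TCP traffic on port 4420, and one ping per address as a sanity check, before nvme-tcp is loaded and nvmf_tgt is started. Collected into a standalone sketch (commands, interface names and addresses copied from the trace; assumes root plus iproute2 and iptables, no error handling or cleanup):

    #!/usr/bin/env bash
    # Sketch of the topology nvmf_veth_init builds, per the trace above.
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # One initiator-side pair and two target-side pairs; the *_br ends join the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # The target ends live inside the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side ends so initiator and target can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Admit NVMe/TCP (port 4420) and intra-bridge forwarding.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Same sanity pings as the harness.
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
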
00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.624 10:04:26 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:49.624 [2024-05-15 10:04:26.836909] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:23:49.624 [2024-05-15 10:04:26.838339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.882 [2024-05-15 10:04:27.013050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.882 [2024-05-15 10:04:27.188694] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.882 [2024-05-15 10:04:27.188761] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.882 [2024-05-15 10:04:27.188777] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.882 [2024-05-15 10:04:27.188791] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.882 [2024-05-15 10:04:27.188802] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.882 [2024-05-15 10:04:27.188944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.882 [2024-05-15 10:04:27.189315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.882 [2024-05-15 10:04:27.190052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.882 [2024-05-15 10:04:27.190061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@861 -- # return 0 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:50.817 [2024-05-15 10:04:27.933674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.817 10:04:27 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:50.817 Malloc0 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:50.817 10:04:28 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:50.817 [2024-05-15 10:04:28.045974] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:50.817 [2024-05-15 10:04:28.046542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.817 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:50.817 [ 00:23:50.817 { 00:23:50.817 "allow_any_host": true, 00:23:50.817 "hosts": [], 00:23:50.817 "listen_addresses": [ 00:23:50.817 { 00:23:50.817 "adrfam": "IPv4", 00:23:50.817 "traddr": "10.0.0.2", 00:23:50.817 "trsvcid": "4420", 00:23:50.817 "trtype": "TCP" 00:23:50.817 } 00:23:50.817 ], 00:23:50.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:50.817 "subtype": "Discovery" 00:23:50.817 }, 00:23:50.817 { 00:23:50.817 "allow_any_host": true, 00:23:50.817 "hosts": [], 00:23:50.817 "listen_addresses": [ 00:23:50.817 { 00:23:50.817 "adrfam": "IPv4", 00:23:50.817 "traddr": "10.0.0.2", 00:23:50.817 "trsvcid": "4420", 00:23:50.817 "trtype": "TCP" 00:23:50.817 } 00:23:50.817 ], 00:23:50.817 "max_cntlid": 65519, 00:23:50.817 "max_namespaces": 32, 00:23:50.817 "min_cntlid": 1, 00:23:50.817 "model_number": "SPDK bdev Controller", 00:23:50.817 "namespaces": [ 00:23:50.817 { 00:23:50.817 "bdev_name": "Malloc0", 00:23:50.817 "eui64": "ABCDEF0123456789", 00:23:50.817 "name": "Malloc0", 00:23:50.817 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:50.817 "nsid": 1, 00:23:50.817 "uuid": "548953aa-1133-4e0e-bbf6-181c2e2fc855" 00:23:50.817 } 00:23:50.818 ], 00:23:50.818 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.818 "serial_number": "SPDK00000000000001", 
00:23:50.818 "subtype": "NVMe" 00:23:50.818 } 00:23:50.818 ] 00:23:50.818 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.818 10:04:28 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:50.818 [2024-05-15 10:04:28.105282] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:23:50.818 [2024-05-15 10:04:28.105338] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86486 ] 00:23:51.079 [2024-05-15 10:04:28.253552] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:51.079 [2024-05-15 10:04:28.253858] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:51.079 [2024-05-15 10:04:28.253924] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:51.079 [2024-05-15 10:04:28.253975] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:51.079 [2024-05-15 10:04:28.254041] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:51.079 [2024-05-15 10:04:28.254249] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:51.079 [2024-05-15 10:04:28.254496] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d74280 0 00:23:51.079 [2024-05-15 10:04:28.288121] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:51.079 [2024-05-15 10:04:28.288275] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:51.079 [2024-05-15 10:04:28.288340] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:51.079 [2024-05-15 10:04:28.288383] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:51.079 [2024-05-15 10:04:28.288506] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.288543] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.288598] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d74280) 00:23:51.079 [2024-05-15 10:04:28.288641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:51.079 [2024-05-15 10:04:28.288807] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc950, cid 0, qid 0 00:23:51.079 [2024-05-15 10:04:28.314116] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.079 [2024-05-15 10:04:28.314266] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.079 [2024-05-15 10:04:28.314330] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.314417] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc950) on tqpair=0x1d74280 00:23:51.079 [2024-05-15 10:04:28.314513] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:51.079 [2024-05-15 10:04:28.314573] 
nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:51.079 [2024-05-15 10:04:28.314647] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:51.079 [2024-05-15 10:04:28.314719] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.314746] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.314782] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d74280) 00:23:51.079 [2024-05-15 10:04:28.314816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-05-15 10:04:28.314933] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc950, cid 0, qid 0 00:23:51.079 [2024-05-15 10:04:28.315080] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.079 [2024-05-15 10:04:28.315131] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.079 [2024-05-15 10:04:28.315220] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.315253] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc950) on tqpair=0x1d74280 00:23:51.079 [2024-05-15 10:04:28.315316] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:51.079 [2024-05-15 10:04:28.315368] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:51.079 [2024-05-15 10:04:28.315422] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.315525] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.315557] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d74280) 00:23:51.079 [2024-05-15 10:04:28.315625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-05-15 10:04:28.315699] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc950, cid 0, qid 0 00:23:51.079 [2024-05-15 10:04:28.315785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.079 [2024-05-15 10:04:28.315815] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.079 [2024-05-15 10:04:28.315841] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.315936] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc950) on tqpair=0x1d74280 00:23:51.079 [2024-05-15 10:04:28.315993] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:51.079 [2024-05-15 10:04:28.316052] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:51.079 [2024-05-15 10:04:28.316128] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.316208] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.316238] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d74280) 00:23:51.079 [2024-05-15 10:04:28.316267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-05-15 10:04:28.316341] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc950, cid 0, qid 0 00:23:51.079 [2024-05-15 10:04:28.316428] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.079 [2024-05-15 10:04:28.316455] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.079 [2024-05-15 10:04:28.316480] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.316504] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc950) on tqpair=0x1d74280 00:23:51.079 [2024-05-15 10:04:28.316569] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:51.079 [2024-05-15 10:04:28.316619] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.316645] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.316669] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d74280) 00:23:51.079 [2024-05-15 10:04:28.316742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-05-15 10:04:28.316810] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc950, cid 0, qid 0 00:23:51.079 [2024-05-15 10:04:28.316885] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.079 [2024-05-15 10:04:28.316912] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.079 [2024-05-15 10:04:28.316936] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.316979] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc950) on tqpair=0x1d74280 00:23:51.079 [2024-05-15 10:04:28.317042] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:51.079 [2024-05-15 10:04:28.317087] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:51.079 [2024-05-15 10:04:28.317215] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:51.079 [2024-05-15 10:04:28.317403] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:51.079 [2024-05-15 10:04:28.317478] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:51.079 [2024-05-15 10:04:28.317549] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.317574] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.317611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d74280) 00:23:51.079 [2024-05-15 10:04:28.317638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.079 [2024-05-15 10:04:28.317764] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc950, cid 0, qid 0 00:23:51.079 [2024-05-15 10:04:28.317850] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.079 [2024-05-15 10:04:28.317883] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.079 [2024-05-15 10:04:28.317935] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.318003] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc950) on tqpair=0x1d74280 00:23:51.079 [2024-05-15 10:04:28.318101] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:51.079 [2024-05-15 10:04:28.318216] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.079 [2024-05-15 10:04:28.318298] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.318328] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d74280) 00:23:51.080 [2024-05-15 10:04:28.318388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.080 [2024-05-15 10:04:28.318455] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc950, cid 0, qid 0 00:23:51.080 [2024-05-15 10:04:28.318555] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.080 [2024-05-15 10:04:28.318586] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.080 [2024-05-15 10:04:28.318611] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.318635] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc950) on tqpair=0x1d74280 00:23:51.080 [2024-05-15 10:04:28.318692] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:51.080 [2024-05-15 10:04:28.318738] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:51.080 [2024-05-15 10:04:28.318819] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:51.080 [2024-05-15 10:04:28.318884] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:51.080 [2024-05-15 10:04:28.318955] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.318993] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d74280) 00:23:51.080 [2024-05-15 10:04:28.319030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.080 [2024-05-15 10:04:28.319202] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc950, cid 0, qid 0 00:23:51.080 [2024-05-15 10:04:28.319362] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.080 [2024-05-15 10:04:28.319423] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
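
Between the namespace setup and the identify runs, host/identify.sh provisions the target over JSON-RPC: nvmf_tgt is launched inside the namespace, then a TCP transport, a 64 MB malloc bdev, the nqn.2016-06.io.spdk:cnode1 subsystem with that bdev as namespace 1, and listeners for both the subsystem and the discovery service are created, after which nvmf_get_subsystems returns the JSON shown earlier. Replayed as a standalone sketch, assuming the test's rpc_cmd wrapper corresponds to SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock socket (SPDK_DIR is a placeholder for the checkout path seen in the trace; RPC arguments copied verbatim):

    # Sketch of the provisioning traced above (host/identify.sh).
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"

    # The target runs inside the namespace built by nvmf_veth_init; its RPC unix
    # socket at /var/tmp/spdk.sock remains reachable from the host side.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    # (the harness waits for the socket with waitforlisten before issuing RPCs)

    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" bdev_malloc_create 64 512 -b Malloc0     # 64 MB bdev, 512-byte blocks
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$RPC" nvmf_get_subsystems                      # should match the JSON above

The first spdk_nvme_identify run then connects to the discovery subsystem; the DEBUG trace surrounding this point is the fabrics controller bring-up (FABRIC CONNECT, property reads of VS and CAP, setting CC.EN = 1, waiting for CSTS.RDY = 1, Identify Controller, keep-alive and AER configuration).
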
00:23:51.080 [2024-05-15 10:04:28.319482] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.319514] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d74280): datao=0, datal=4096, cccid=0 00:23:51.080 [2024-05-15 10:04:28.319605] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbc950) on tqpair(0x1d74280): expected_datao=0, payload_size=4096 00:23:51.080 [2024-05-15 10:04:28.319687] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.319725] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.319793] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.319832] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.080 [2024-05-15 10:04:28.319861] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.080 [2024-05-15 10:04:28.319917] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.319949] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc950) on tqpair=0x1d74280 00:23:51.080 [2024-05-15 10:04:28.320003] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:51.080 [2024-05-15 10:04:28.320086] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:51.080 [2024-05-15 10:04:28.320159] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:51.080 [2024-05-15 10:04:28.320226] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:51.080 [2024-05-15 10:04:28.320279] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:51.080 [2024-05-15 10:04:28.320324] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:51.080 [2024-05-15 10:04:28.320388] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:51.080 [2024-05-15 10:04:28.320452] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.320516] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.320547] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d74280) 00:23:51.080 [2024-05-15 10:04:28.320576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.080 [2024-05-15 10:04:28.320697] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc950, cid 0, qid 0 00:23:51.080 [2024-05-15 10:04:28.320791] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.080 [2024-05-15 10:04:28.320849] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.080 [2024-05-15 10:04:28.320914] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.320980] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbc950) on tqpair=0x1d74280 00:23:51.080 [2024-05-15 
10:04:28.321063] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.321151] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.321182] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d74280) 00:23:51.080 [2024-05-15 10:04:28.321262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.080 [2024-05-15 10:04:28.321313] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.321375] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.321405] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d74280) 00:23:51.080 [2024-05-15 10:04:28.321432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.080 [2024-05-15 10:04:28.321533] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.321585] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.321614] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d74280) 00:23:51.080 [2024-05-15 10:04:28.321641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.080 [2024-05-15 10:04:28.321699] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.321738] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.321762] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d74280) 00:23:51.080 [2024-05-15 10:04:28.321788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.080 [2024-05-15 10:04:28.321851] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:51.080 [2024-05-15 10:04:28.321916] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:51.080 [2024-05-15 10:04:28.322010] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.322040] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d74280) 00:23:51.080 [2024-05-15 10:04:28.322067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.080 [2024-05-15 10:04:28.322174] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbc950, cid 0, qid 0 00:23:51.080 [2024-05-15 10:04:28.322204] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbcab0, cid 1, qid 0 00:23:51.080 [2024-05-15 10:04:28.322238] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbcc10, cid 2, qid 0 00:23:51.080 [2024-05-15 10:04:28.322264] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbcd70, cid 3, qid 0 00:23:51.080 [2024-05-15 10:04:28.322299] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbced0, 
cid 4, qid 0 00:23:51.080 [2024-05-15 10:04:28.322358] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.080 [2024-05-15 10:04:28.322420] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.080 [2024-05-15 10:04:28.322450] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.322474] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbced0) on tqpair=0x1d74280 00:23:51.080 [2024-05-15 10:04:28.322537] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:51.080 [2024-05-15 10:04:28.322587] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:51.080 [2024-05-15 10:04:28.322640] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.322665] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d74280) 00:23:51.080 [2024-05-15 10:04:28.322704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.080 [2024-05-15 10:04:28.322779] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbced0, cid 4, qid 0 00:23:51.080 [2024-05-15 10:04:28.322873] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.080 [2024-05-15 10:04:28.322900] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.080 [2024-05-15 10:04:28.322924] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.322982] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d74280): datao=0, datal=4096, cccid=4 00:23:51.080 [2024-05-15 10:04:28.323105] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbced0) on tqpair(0x1d74280): expected_datao=0, payload_size=4096 00:23:51.080 [2024-05-15 10:04:28.323192] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.080 [2024-05-15 10:04:28.323223] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.323276] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.323350] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.081 [2024-05-15 10:04:28.323381] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.081 [2024-05-15 10:04:28.323433] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.323502] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbced0) on tqpair=0x1d74280 00:23:51.081 [2024-05-15 10:04:28.323597] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:51.081 [2024-05-15 10:04:28.323818] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.323881] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d74280) 00:23:51.081 [2024-05-15 10:04:28.323939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.081 [2024-05-15 10:04:28.324011] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.324048] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.324074] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d74280) 00:23:51.081 [2024-05-15 10:04:28.324112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.081 [2024-05-15 10:04:28.324217] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbced0, cid 4, qid 0 00:23:51.081 [2024-05-15 10:04:28.324246] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbd030, cid 5, qid 0 00:23:51.081 [2024-05-15 10:04:28.324368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.081 [2024-05-15 10:04:28.324396] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.081 [2024-05-15 10:04:28.324420] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.324460] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d74280): datao=0, datal=1024, cccid=4 00:23:51.081 [2024-05-15 10:04:28.324515] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbced0) on tqpair(0x1d74280): expected_datao=0, payload_size=1024 00:23:51.081 [2024-05-15 10:04:28.324558] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.324624] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.324654] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.324680] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.081 [2024-05-15 10:04:28.324706] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.081 [2024-05-15 10:04:28.324752] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.324775] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbd030) on tqpair=0x1d74280 00:23:51.081 [2024-05-15 10:04:28.374147] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.081 [2024-05-15 10:04:28.374356] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.081 [2024-05-15 10:04:28.374468] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.374512] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbced0) on tqpair=0x1d74280 00:23:51.081 [2024-05-15 10:04:28.374691] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.374758] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d74280) 00:23:51.081 [2024-05-15 10:04:28.374864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.081 [2024-05-15 10:04:28.374984] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbced0, cid 4, qid 0 00:23:51.081 [2024-05-15 10:04:28.375145] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.081 [2024-05-15 10:04:28.375259] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.081 [2024-05-15 10:04:28.375322] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.375384] 
nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d74280): datao=0, datal=3072, cccid=4 00:23:51.081 [2024-05-15 10:04:28.375468] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbced0) on tqpair(0x1d74280): expected_datao=0, payload_size=3072 00:23:51.081 [2024-05-15 10:04:28.375538] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.375571] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.375598] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.375630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.081 [2024-05-15 10:04:28.375678] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.081 [2024-05-15 10:04:28.375705] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.375731] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbced0) on tqpair=0x1d74280 00:23:51.081 [2024-05-15 10:04:28.375790] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.375822] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d74280) 00:23:51.081 [2024-05-15 10:04:28.375853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.081 [2024-05-15 10:04:28.375926] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbced0, cid 4, qid 0 00:23:51.081 [2024-05-15 10:04:28.376040] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.081 [2024-05-15 10:04:28.376074] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.081 [2024-05-15 10:04:28.376182] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.376213] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d74280): datao=0, datal=8, cccid=4 00:23:51.081 [2024-05-15 10:04:28.376292] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dbced0) on tqpair(0x1d74280): expected_datao=0, payload_size=8 00:23:51.081 [2024-05-15 10:04:28.376425] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.376526] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.376557] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.425572] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.081 [2024-05-15 10:04:28.425761] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.081 [2024-05-15 10:04:28.425827] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.081 [2024-05-15 10:04:28.425860] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbced0) on tqpair=0x1d74280 00:23:51.081 ===================================================== 00:23:51.081 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:51.081 ===================================================== 00:23:51.081 Controller Capabilities/Features 00:23:51.081 ================================ 00:23:51.081 Vendor ID: 0000 00:23:51.081 Subsystem Vendor ID: 0000 00:23:51.081 Serial Number: .................... 
00:23:51.081 Model Number: ........................................ 00:23:51.081 Firmware Version: 24.05 00:23:51.081 Recommended Arb Burst: 0 00:23:51.081 IEEE OUI Identifier: 00 00 00 00:23:51.081 Multi-path I/O 00:23:51.081 May have multiple subsystem ports: No 00:23:51.081 May have multiple controllers: No 00:23:51.081 Associated with SR-IOV VF: No 00:23:51.081 Max Data Transfer Size: 131072 00:23:51.081 Max Number of Namespaces: 0 00:23:51.081 Max Number of I/O Queues: 1024 00:23:51.081 NVMe Specification Version (VS): 1.3 00:23:51.081 NVMe Specification Version (Identify): 1.3 00:23:51.081 Maximum Queue Entries: 128 00:23:51.081 Contiguous Queues Required: Yes 00:23:51.081 Arbitration Mechanisms Supported 00:23:51.081 Weighted Round Robin: Not Supported 00:23:51.081 Vendor Specific: Not Supported 00:23:51.081 Reset Timeout: 15000 ms 00:23:51.081 Doorbell Stride: 4 bytes 00:23:51.081 NVM Subsystem Reset: Not Supported 00:23:51.081 Command Sets Supported 00:23:51.081 NVM Command Set: Supported 00:23:51.081 Boot Partition: Not Supported 00:23:51.081 Memory Page Size Minimum: 4096 bytes 00:23:51.081 Memory Page Size Maximum: 4096 bytes 00:23:51.081 Persistent Memory Region: Not Supported 00:23:51.081 Optional Asynchronous Events Supported 00:23:51.081 Namespace Attribute Notices: Not Supported 00:23:51.081 Firmware Activation Notices: Not Supported 00:23:51.081 ANA Change Notices: Not Supported 00:23:51.081 PLE Aggregate Log Change Notices: Not Supported 00:23:51.081 LBA Status Info Alert Notices: Not Supported 00:23:51.081 EGE Aggregate Log Change Notices: Not Supported 00:23:51.081 Normal NVM Subsystem Shutdown event: Not Supported 00:23:51.081 Zone Descriptor Change Notices: Not Supported 00:23:51.081 Discovery Log Change Notices: Supported 00:23:51.081 Controller Attributes 00:23:51.081 128-bit Host Identifier: Not Supported 00:23:51.081 Non-Operational Permissive Mode: Not Supported 00:23:51.081 NVM Sets: Not Supported 00:23:51.081 Read Recovery Levels: Not Supported 00:23:51.081 Endurance Groups: Not Supported 00:23:51.081 Predictable Latency Mode: Not Supported 00:23:51.081 Traffic Based Keep ALive: Not Supported 00:23:51.081 Namespace Granularity: Not Supported 00:23:51.081 SQ Associations: Not Supported 00:23:51.081 UUID List: Not Supported 00:23:51.081 Multi-Domain Subsystem: Not Supported 00:23:51.081 Fixed Capacity Management: Not Supported 00:23:51.081 Variable Capacity Management: Not Supported 00:23:51.081 Delete Endurance Group: Not Supported 00:23:51.081 Delete NVM Set: Not Supported 00:23:51.081 Extended LBA Formats Supported: Not Supported 00:23:51.081 Flexible Data Placement Supported: Not Supported 00:23:51.081 00:23:51.081 Controller Memory Buffer Support 00:23:51.081 ================================ 00:23:51.081 Supported: No 00:23:51.081 00:23:51.081 Persistent Memory Region Support 00:23:51.081 ================================ 00:23:51.081 Supported: No 00:23:51.081 00:23:51.081 Admin Command Set Attributes 00:23:51.081 ============================ 00:23:51.082 Security Send/Receive: Not Supported 00:23:51.082 Format NVM: Not Supported 00:23:51.082 Firmware Activate/Download: Not Supported 00:23:51.082 Namespace Management: Not Supported 00:23:51.082 Device Self-Test: Not Supported 00:23:51.082 Directives: Not Supported 00:23:51.082 NVMe-MI: Not Supported 00:23:51.082 Virtualization Management: Not Supported 00:23:51.082 Doorbell Buffer Config: Not Supported 00:23:51.082 Get LBA Status Capability: Not Supported 00:23:51.082 Command & Feature Lockdown Capability: 
Not Supported 00:23:51.082 Abort Command Limit: 1 00:23:51.082 Async Event Request Limit: 4 00:23:51.082 Number of Firmware Slots: N/A 00:23:51.082 Firmware Slot 1 Read-Only: N/A 00:23:51.082 Firmware Activation Without Reset: N/A 00:23:51.082 Multiple Update Detection Support: N/A 00:23:51.082 Firmware Update Granularity: No Information Provided 00:23:51.082 Per-Namespace SMART Log: No 00:23:51.082 Asymmetric Namespace Access Log Page: Not Supported 00:23:51.082 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:51.082 Command Effects Log Page: Not Supported 00:23:51.082 Get Log Page Extended Data: Supported 00:23:51.082 Telemetry Log Pages: Not Supported 00:23:51.082 Persistent Event Log Pages: Not Supported 00:23:51.082 Supported Log Pages Log Page: May Support 00:23:51.082 Commands Supported & Effects Log Page: Not Supported 00:23:51.082 Feature Identifiers & Effects Log Page:May Support 00:23:51.082 NVMe-MI Commands & Effects Log Page: May Support 00:23:51.082 Data Area 4 for Telemetry Log: Not Supported 00:23:51.082 Error Log Page Entries Supported: 128 00:23:51.082 Keep Alive: Not Supported 00:23:51.082 00:23:51.082 NVM Command Set Attributes 00:23:51.082 ========================== 00:23:51.082 Submission Queue Entry Size 00:23:51.082 Max: 1 00:23:51.082 Min: 1 00:23:51.082 Completion Queue Entry Size 00:23:51.082 Max: 1 00:23:51.082 Min: 1 00:23:51.082 Number of Namespaces: 0 00:23:51.082 Compare Command: Not Supported 00:23:51.082 Write Uncorrectable Command: Not Supported 00:23:51.082 Dataset Management Command: Not Supported 00:23:51.082 Write Zeroes Command: Not Supported 00:23:51.082 Set Features Save Field: Not Supported 00:23:51.082 Reservations: Not Supported 00:23:51.082 Timestamp: Not Supported 00:23:51.082 Copy: Not Supported 00:23:51.082 Volatile Write Cache: Not Present 00:23:51.082 Atomic Write Unit (Normal): 1 00:23:51.082 Atomic Write Unit (PFail): 1 00:23:51.082 Atomic Compare & Write Unit: 1 00:23:51.082 Fused Compare & Write: Supported 00:23:51.082 Scatter-Gather List 00:23:51.082 SGL Command Set: Supported 00:23:51.082 SGL Keyed: Supported 00:23:51.082 SGL Bit Bucket Descriptor: Not Supported 00:23:51.082 SGL Metadata Pointer: Not Supported 00:23:51.082 Oversized SGL: Not Supported 00:23:51.082 SGL Metadata Address: Not Supported 00:23:51.082 SGL Offset: Supported 00:23:51.082 Transport SGL Data Block: Not Supported 00:23:51.082 Replay Protected Memory Block: Not Supported 00:23:51.082 00:23:51.082 Firmware Slot Information 00:23:51.082 ========================= 00:23:51.082 Active slot: 0 00:23:51.082 00:23:51.082 00:23:51.082 Error Log 00:23:51.082 ========= 00:23:51.082 00:23:51.082 Active Namespaces 00:23:51.082 ================= 00:23:51.082 Discovery Log Page 00:23:51.082 ================== 00:23:51.082 Generation Counter: 2 00:23:51.082 Number of Records: 2 00:23:51.082 Record Format: 0 00:23:51.082 00:23:51.082 Discovery Log Entry 0 00:23:51.082 ---------------------- 00:23:51.082 Transport Type: 3 (TCP) 00:23:51.082 Address Family: 1 (IPv4) 00:23:51.082 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:51.082 Entry Flags: 00:23:51.082 Duplicate Returned Information: 1 00:23:51.082 Explicit Persistent Connection Support for Discovery: 1 00:23:51.082 Transport Requirements: 00:23:51.082 Secure Channel: Not Required 00:23:51.082 Port ID: 0 (0x0000) 00:23:51.082 Controller ID: 65535 (0xffff) 00:23:51.082 Admin Max SQ Size: 128 00:23:51.082 Transport Service Identifier: 4420 00:23:51.082 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:23:51.082 Transport Address: 10.0.0.2 00:23:51.082 Discovery Log Entry 1 00:23:51.082 ---------------------- 00:23:51.082 Transport Type: 3 (TCP) 00:23:51.082 Address Family: 1 (IPv4) 00:23:51.082 Subsystem Type: 2 (NVM Subsystem) 00:23:51.082 Entry Flags: 00:23:51.082 Duplicate Returned Information: 0 00:23:51.082 Explicit Persistent Connection Support for Discovery: 0 00:23:51.082 Transport Requirements: 00:23:51.082 Secure Channel: Not Required 00:23:51.082 Port ID: 0 (0x0000) 00:23:51.082 Controller ID: 65535 (0xffff) 00:23:51.082 Admin Max SQ Size: 128 00:23:51.082 Transport Service Identifier: 4420 00:23:51.082 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:51.082 Transport Address: 10.0.0.2 [2024-05-15 10:04:28.426343] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:51.082 [2024-05-15 10:04:28.426454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.082 [2024-05-15 10:04:28.426548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.082 [2024-05-15 10:04:28.426673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.082 [2024-05-15 10:04:28.426809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.082 [2024-05-15 10:04:28.426918] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.082 [2024-05-15 10:04:28.426976] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.082 [2024-05-15 10:04:28.427006] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d74280) 00:23:51.082 [2024-05-15 10:04:28.427105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.082 [2024-05-15 10:04:28.427234] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbcd70, cid 3, qid 0 00:23:51.082 [2024-05-15 10:04:28.427371] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.082 [2024-05-15 10:04:28.427443] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.082 [2024-05-15 10:04:28.427526] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.082 [2024-05-15 10:04:28.427557] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbcd70) on tqpair=0x1d74280 00:23:51.082 [2024-05-15 10:04:28.427609] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.082 [2024-05-15 10:04:28.427634] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.082 [2024-05-15 10:04:28.427658] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d74280) 00:23:51.082 [2024-05-15 10:04:28.427732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.082 [2024-05-15 10:04:28.427841] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbcd70, cid 3, qid 0 00:23:51.083 [2024-05-15 10:04:28.427964] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.083 [2024-05-15 10:04:28.427992] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:51.083 [2024-05-15 10:04:28.428048] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.428166] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbcd70) on tqpair=0x1d74280 00:23:51.083 [2024-05-15 10:04:28.428327] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:51.083 [2024-05-15 10:04:28.428478] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:51.083 [2024-05-15 10:04:28.428631] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.428690] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.428776] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d74280) 00:23:51.083 [2024-05-15 10:04:28.428860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.083 [2024-05-15 10:04:28.428961] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbcd70, cid 3, qid 0 00:23:51.083 [2024-05-15 10:04:28.429055] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.083 [2024-05-15 10:04:28.429126] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.083 [2024-05-15 10:04:28.429184] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.429242] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbcd70) on tqpair=0x1d74280 00:23:51.083 [2024-05-15 10:04:28.429347] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.429475] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.429505] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d74280) 00:23:51.083 [2024-05-15 10:04:28.429535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.083 [2024-05-15 10:04:28.429612] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbcd70, cid 3, qid 0 00:23:51.083 [2024-05-15 10:04:28.429704] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.083 [2024-05-15 10:04:28.429732] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.083 [2024-05-15 10:04:28.429946] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.429978] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbcd70) on tqpair=0x1d74280 00:23:51.083 [2024-05-15 10:04:28.430132] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.430197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.430229] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d74280) 00:23:51.083 [2024-05-15 10:04:28.430294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.083 [2024-05-15 10:04:28.430365] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbcd70, cid 3, qid 0 00:23:51.083 [2024-05-15 10:04:28.430450] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.083 [2024-05-15 10:04:28.430513] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.083 [2024-05-15 10:04:28.430545] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.430611] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbcd70) on tqpair=0x1d74280 00:23:51.083 [2024-05-15 10:04:28.430672] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.430735] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.430766] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d74280) 00:23:51.083 [2024-05-15 10:04:28.430796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.083 [2024-05-15 10:04:28.430943] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbcd70, cid 3, qid 0 00:23:51.083 [2024-05-15 10:04:28.431048] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.083 [2024-05-15 10:04:28.431154] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.083 [2024-05-15 10:04:28.431213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.431243] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbcd70) on tqpair=0x1d74280 00:23:51.083 [2024-05-15 10:04:28.431353] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.431410] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.431441] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d74280) 00:23:51.083 [2024-05-15 10:04:28.431522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.083 [2024-05-15 10:04:28.431588] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbcd70, cid 3, qid 0 00:23:51.083 [2024-05-15 10:04:28.431665] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.083 [2024-05-15 10:04:28.431693] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.083 [2024-05-15 10:04:28.439641] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.439764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbcd70) on tqpair=0x1d74280 00:23:51.083 [2024-05-15 10:04:28.439894] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.439958] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.439989] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d74280) 00:23:51.083 [2024-05-15 10:04:28.440069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.083 [2024-05-15 10:04:28.440173] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dbcd70, cid 3, qid 0 00:23:51.083 [2024-05-15 10:04:28.440278] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.083 [2024-05-15 10:04:28.440323] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.083 
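
The repeated FABRIC PROPERTY GET/SET entries around this point are the discovery controller being torn down. Over NVMe-oF, controller register accesses travel as Fabrics Property Get/Set commands on the admin queue, so the shutdown handshake (write CC.SHN, then poll CSTS.SHST) shows up as this burst of property commands, bracketed by the "RTD3E = 0 us" / "shutdown timeout = 10000 ms" entries above and the "shutdown complete in 11 milliseconds" entry just below. A rough sketch of that handshake follows; it is not SPDK source, prop_get()/prop_set() are hypothetical stand-ins for the property commands, and the register offsets and bit positions come from the NVMe specification.

    /* Sketch only -- not SPDK source.  prop_get()/prop_set() are hypothetical
     * stand-ins for Fabrics Property Get/Set on the admin queue. */
    #include <stdint.h>
    #include <stdbool.h>

    #define NVME_REG_CC        0x14              /* Controller Configuration      */
    #define NVME_REG_CSTS      0x1c              /* Controller Status             */
    #define CC_SHN_NORMAL      (1u << 14)        /* CC.SHN = 01b: normal shutdown */
    #define CSTS_SHST_MASK     (0x3u << 2)
    #define CSTS_SHST_COMPLETE (0x2u << 2)       /* shutdown processing complete  */

    extern uint64_t prop_get(uint32_t offset);               /* hypothetical */
    extern void     prop_set(uint32_t offset, uint64_t val); /* hypothetical */

    static bool shutdown_controller(unsigned timeout_ms)
    {
        /* Request a normal shutdown by setting CC.SHN. */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | CC_SHN_NORMAL);

        for (unsigned waited_ms = 0; waited_ms < timeout_ms; waited_ms++) {
            if ((prop_get(NVME_REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_COMPLETE)
                return true;      /* -> "shutdown complete in N milliseconds" */
            /* sleep ~1 ms between polls (omitted) */
        }
        return false;             /* shutdown timeout expired */
    }

Each FABRIC PROPERTY GET in the log is one iteration of a poll loop like this.
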
[2024-05-15 10:04:28.440385] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.083 [2024-05-15 10:04:28.440442] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1dbcd70) on tqpair=0x1d74280 00:23:51.083 [2024-05-15 10:04:28.440557] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 11 milliseconds 00:23:51.083 00:23:51.083 10:04:28 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:51.346 [2024-05-15 10:04:28.479901] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:23:51.346 [2024-05-15 10:04:28.479987] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86489 ] 00:23:51.346 [2024-05-15 10:04:28.634760] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:51.346 [2024-05-15 10:04:28.640705] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:51.346 [2024-05-15 10:04:28.640791] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:51.346 [2024-05-15 10:04:28.640846] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:51.346 [2024-05-15 10:04:28.640926] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:51.346 [2024-05-15 10:04:28.641121] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:51.346 [2024-05-15 10:04:28.641308] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x224a280 0 00:23:51.346 [2024-05-15 10:04:28.669128] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:51.346 [2024-05-15 10:04:28.669276] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:51.346 [2024-05-15 10:04:28.669341] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:51.346 [2024-05-15 10:04:28.669405] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:51.346 [2024-05-15 10:04:28.669503] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.346 [2024-05-15 10:04:28.669533] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.346 [2024-05-15 10:04:28.669602] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x224a280) 00:23:51.346 [2024-05-15 10:04:28.669676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:51.346 [2024-05-15 10:04:28.669827] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292950, cid 0, qid 0 00:23:51.346 [2024-05-15 10:04:28.694288] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.346 [2024-05-15 10:04:28.694483] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.346 [2024-05-15 10:04:28.694550] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.346 [2024-05-15 10:04:28.694629] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x2292950) on tqpair=0x224a280 00:23:51.346 [2024-05-15 10:04:28.694739] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:51.346 [2024-05-15 10:04:28.694816] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:51.346 [2024-05-15 10:04:28.694905] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:51.346 [2024-05-15 10:04:28.695008] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.695064] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.695090] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x224a280) 00:23:51.347 [2024-05-15 10:04:28.695189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.347 [2024-05-15 10:04:28.695297] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292950, cid 0, qid 0 00:23:51.347 [2024-05-15 10:04:28.695393] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.347 [2024-05-15 10:04:28.695424] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.347 [2024-05-15 10:04:28.695458] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.695484] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292950) on tqpair=0x224a280 00:23:51.347 [2024-05-15 10:04:28.695556] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:51.347 [2024-05-15 10:04:28.695623] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:51.347 [2024-05-15 10:04:28.695673] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.695708] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.695734] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x224a280) 00:23:51.347 [2024-05-15 10:04:28.695778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.347 [2024-05-15 10:04:28.695840] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292950, cid 0, qid 0 00:23:51.347 [2024-05-15 10:04:28.695934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.347 [2024-05-15 10:04:28.695969] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.347 [2024-05-15 10:04:28.696043] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.696075] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292950) on tqpair=0x224a280 00:23:51.347 [2024-05-15 10:04:28.696195] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:51.347 [2024-05-15 10:04:28.696285] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:51.347 [2024-05-15 10:04:28.696338] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.696391] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.696446] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x224a280) 00:23:51.347 [2024-05-15 10:04:28.696524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.347 [2024-05-15 10:04:28.696616] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292950, cid 0, qid 0 00:23:51.347 [2024-05-15 10:04:28.696677] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.347 [2024-05-15 10:04:28.696760] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.347 [2024-05-15 10:04:28.696790] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.696814] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292950) on tqpair=0x224a280 00:23:51.347 [2024-05-15 10:04:28.696881] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:51.347 [2024-05-15 10:04:28.697070] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.697138] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.697168] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x224a280) 00:23:51.347 [2024-05-15 10:04:28.697238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.347 [2024-05-15 10:04:28.697327] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292950, cid 0, qid 0 00:23:51.347 [2024-05-15 10:04:28.697387] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.347 [2024-05-15 10:04:28.697438] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.347 [2024-05-15 10:04:28.697463] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.697490] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292950) on tqpair=0x224a280 00:23:51.347 [2024-05-15 10:04:28.697535] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:51.347 [2024-05-15 10:04:28.697602] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:51.347 [2024-05-15 10:04:28.697732] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:51.347 [2024-05-15 10:04:28.697942] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:51.347 [2024-05-15 10:04:28.697999] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:51.347 [2024-05-15 10:04:28.698084] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.698152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.698182] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x224a280) 00:23:51.347 [2024-05-15 
10:04:28.698233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.347 [2024-05-15 10:04:28.698302] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292950, cid 0, qid 0 00:23:51.347 [2024-05-15 10:04:28.698367] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.347 [2024-05-15 10:04:28.711120] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.347 [2024-05-15 10:04:28.711232] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.711266] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292950) on tqpair=0x224a280 00:23:51.347 [2024-05-15 10:04:28.711350] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:51.347 [2024-05-15 10:04:28.711409] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.711437] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.711462] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x224a280) 00:23:51.347 [2024-05-15 10:04:28.711495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.347 [2024-05-15 10:04:28.711562] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292950, cid 0, qid 0 00:23:51.347 [2024-05-15 10:04:28.711633] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.347 [2024-05-15 10:04:28.711662] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.347 [2024-05-15 10:04:28.711687] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.711722] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292950) on tqpair=0x224a280 00:23:51.347 [2024-05-15 10:04:28.711788] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:51.347 [2024-05-15 10:04:28.711835] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:51.347 [2024-05-15 10:04:28.711905] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:51.347 [2024-05-15 10:04:28.711982] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:51.347 [2024-05-15 10:04:28.712152] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.712228] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x224a280) 00:23:51.347 [2024-05-15 10:04:28.712262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.347 [2024-05-15 10:04:28.712358] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292950, cid 0, qid 0 00:23:51.347 [2024-05-15 10:04:28.712472] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.347 [2024-05-15 10:04:28.712500] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: 
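
Before the IDENTIFY that follows, the trace above has just walked the standard controller-enable ladder: read VS, read CAP, check CC.EN (here "CC.EN = 0 && CSTS.RDY = 0"), write CC.EN = 1, then wait for CSTS.RDY = 1 until "controller is ready". The sketch below shows the same ladder in the style used above; again prop_get()/prop_set() are hypothetical stand-ins and the offsets come from the NVMe specification, while the real driver also handles the disable path first, as the "disable and wait for CSTS.RDY = 0" state above indicates.

    /* Sketch only -- the enable ladder recorded by _nvme_ctrlr_set_state above. */
    #include <stdint.h>

    #define NVME_REG_CAP  0x00
    #define NVME_REG_VS   0x08
    #define NVME_REG_CC   0x14
    #define NVME_REG_CSTS 0x1c
    #define CC_EN         (1u << 0)
    #define CSTS_RDY      (1u << 0)

    extern uint64_t prop_get(uint32_t offset);               /* hypothetical */
    extern void     prop_set(uint32_t offset, uint64_t val); /* hypothetical */

    static void enable_controller(void)
    {
        (void)prop_get(NVME_REG_VS);              /* "setting state to read vs"  */
        (void)prop_get(NVME_REG_CAP);             /* "setting state to read cap" */

        uint64_t cc = prop_get(NVME_REG_CC);      /* "check en"                  */
        if (!(cc & CC_EN))
            prop_set(NVME_REG_CC, cc | CC_EN);    /* "Setting CC.EN = 1"         */

        while (!(prop_get(NVME_REG_CSTS) & CSTS_RDY))
            ;                                     /* "wait for CSTS.RDY = 1"     */
        /* "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" */
    }
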
enter: pdu type =7 00:23:51.347 [2024-05-15 10:04:28.712574] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.712605] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x224a280): datao=0, datal=4096, cccid=0 00:23:51.347 [2024-05-15 10:04:28.712698] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2292950) on tqpair(0x224a280): expected_datao=0, payload_size=4096 00:23:51.347 [2024-05-15 10:04:28.712781] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.712817] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.347 [2024-05-15 10:04:28.712843] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.712897] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.348 [2024-05-15 10:04:28.712923] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.348 [2024-05-15 10:04:28.712948] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.712972] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292950) on tqpair=0x224a280 00:23:51.348 [2024-05-15 10:04:28.713047] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:51.348 [2024-05-15 10:04:28.713112] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:51.348 [2024-05-15 10:04:28.713199] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:51.348 [2024-05-15 10:04:28.713230] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:51.348 [2024-05-15 10:04:28.713313] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:51.348 [2024-05-15 10:04:28.713413] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:51.348 [2024-05-15 10:04:28.713487] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:51.348 [2024-05-15 10:04:28.713552] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.713578] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.713638] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x224a280) 00:23:51.348 [2024-05-15 10:04:28.713673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.348 [2024-05-15 10:04:28.713764] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292950, cid 0, qid 0 00:23:51.348 [2024-05-15 10:04:28.713826] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.348 [2024-05-15 10:04:28.713853] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.348 [2024-05-15 10:04:28.713877] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.713915] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292950) on tqpair=0x224a280 00:23:51.348 [2024-05-15 10:04:28.713976] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.714000] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.714024] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x224a280) 00:23:51.348 [2024-05-15 10:04:28.714059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.348 [2024-05-15 10:04:28.714119] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.714144] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.714168] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x224a280) 00:23:51.348 [2024-05-15 10:04:28.714264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.348 [2024-05-15 10:04:28.714315] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.714352] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.714376] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x224a280) 00:23:51.348 [2024-05-15 10:04:28.714417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.348 [2024-05-15 10:04:28.714461] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.714503] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.714527] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.348 [2024-05-15 10:04:28.714564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.348 [2024-05-15 10:04:28.714607] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:51.348 [2024-05-15 10:04:28.714688] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:51.348 [2024-05-15 10:04:28.714736] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.714760] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x224a280) 00:23:51.348 [2024-05-15 10:04:28.714832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.348 [2024-05-15 10:04:28.714902] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292950, cid 0, qid 0 00:23:51.348 [2024-05-15 10:04:28.714945] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292ab0, cid 1, qid 0 00:23:51.348 [2024-05-15 10:04:28.715000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292c10, cid 2, qid 0 00:23:51.348 [2024-05-15 10:04:28.715040] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.348 [2024-05-15 10:04:28.715118] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292ed0, cid 4, qid 0 00:23:51.348 [2024-05-15 
10:04:28.715155] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.348 [2024-05-15 10:04:28.715182] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.348 [2024-05-15 10:04:28.715241] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.715271] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292ed0) on tqpair=0x224a280 00:23:51.348 [2024-05-15 10:04:28.715317] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:51.348 [2024-05-15 10:04:28.715396] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:51.348 [2024-05-15 10:04:28.715454] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:51.348 [2024-05-15 10:04:28.715532] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:51.348 [2024-05-15 10:04:28.715584] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.715609] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.715633] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x224a280) 00:23:51.348 [2024-05-15 10:04:28.715670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:51.348 [2024-05-15 10:04:28.715733] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292ed0, cid 4, qid 0 00:23:51.348 [2024-05-15 10:04:28.715804] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.348 [2024-05-15 10:04:28.715831] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.348 [2024-05-15 10:04:28.715885] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.715915] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292ed0) on tqpair=0x224a280 00:23:51.348 [2024-05-15 10:04:28.716073] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:51.348 [2024-05-15 10:04:28.716171] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:51.348 [2024-05-15 10:04:28.716237] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.716262] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x224a280) 00:23:51.348 [2024-05-15 10:04:28.716291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.348 [2024-05-15 10:04:28.716353] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292ed0, cid 4, qid 0 00:23:51.348 [2024-05-15 10:04:28.716418] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.348 [2024-05-15 10:04:28.716445] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.348 [2024-05-15 10:04:28.716469] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.716543] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x224a280): datao=0, datal=4096, cccid=4 00:23:51.348 [2024-05-15 10:04:28.716593] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2292ed0) on tqpair(0x224a280): expected_datao=0, payload_size=4096 00:23:51.348 [2024-05-15 10:04:28.716656] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.716685] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.716716] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.716745] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.348 [2024-05-15 10:04:28.716772] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.348 [2024-05-15 10:04:28.716858] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.716888] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292ed0) on tqpair=0x224a280 00:23:51.348 [2024-05-15 10:04:28.716968] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:51.348 [2024-05-15 10:04:28.717024] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:51.348 [2024-05-15 10:04:28.717074] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:51.348 [2024-05-15 10:04:28.717199] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.717263] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x224a280) 00:23:51.348 [2024-05-15 10:04:28.717297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.348 [2024-05-15 10:04:28.717438] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292ed0, cid 4, qid 0 00:23:51.348 [2024-05-15 10:04:28.717519] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.348 [2024-05-15 10:04:28.717551] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.348 [2024-05-15 10:04:28.717618] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.348 [2024-05-15 10:04:28.717648] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x224a280): datao=0, datal=4096, cccid=4 00:23:51.348 [2024-05-15 10:04:28.717759] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2292ed0) on tqpair(0x224a280): expected_datao=0, payload_size=4096 00:23:51.348 [2024-05-15 10:04:28.717846] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.717882] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.717909] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.717971] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.349 [2024-05-15 10:04:28.718048] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.349 [2024-05-15 10:04:28.718117] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.718150] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292ed0) on tqpair=0x224a280 00:23:51.349 [2024-05-15 10:04:28.718308] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:51.349 [2024-05-15 10:04:28.718438] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:51.349 [2024-05-15 10:04:28.718553] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.718582] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x224a280) 00:23:51.349 [2024-05-15 10:04:28.718648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.349 [2024-05-15 10:04:28.718722] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292ed0, cid 4, qid 0 00:23:51.349 [2024-05-15 10:04:28.718800] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.349 [2024-05-15 10:04:28.718834] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.349 [2024-05-15 10:04:28.718860] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.718886] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x224a280): datao=0, datal=4096, cccid=4 00:23:51.349 [2024-05-15 10:04:28.718941] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2292ed0) on tqpair(0x224a280): expected_datao=0, payload_size=4096 00:23:51.349 [2024-05-15 10:04:28.718992] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.719032] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.719112] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.719152] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.349 [2024-05-15 10:04:28.719220] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.349 [2024-05-15 10:04:28.719281] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.719338] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292ed0) on tqpair=0x224a280 00:23:51.349 [2024-05-15 10:04:28.719398] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:51.349 [2024-05-15 10:04:28.719494] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:51.349 [2024-05-15 10:04:28.719605] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:51.349 [2024-05-15 10:04:28.719699] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:51.349 [2024-05-15 10:04:28.719775] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:51.349 [2024-05-15 10:04:28.719828] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set 
Features - Host ID 00:23:51.349 [2024-05-15 10:04:28.719919] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:51.349 [2024-05-15 10:04:28.720027] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:51.349 [2024-05-15 10:04:28.720171] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.720237] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x224a280) 00:23:51.349 [2024-05-15 10:04:28.720313] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.349 [2024-05-15 10:04:28.720399] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.720428] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.720479] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x224a280) 00:23:51.349 [2024-05-15 10:04:28.720511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:51.349 [2024-05-15 10:04:28.720634] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292ed0, cid 4, qid 0 00:23:51.349 [2024-05-15 10:04:28.720711] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293030, cid 5, qid 0 00:23:51.349 [2024-05-15 10:04:28.720751] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.349 [2024-05-15 10:04:28.720852] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.349 [2024-05-15 10:04:28.720884] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.720948] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292ed0) on tqpair=0x224a280 00:23:51.349 [2024-05-15 10:04:28.721005] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.349 [2024-05-15 10:04:28.721062] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.349 [2024-05-15 10:04:28.721126] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.721159] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2293030) on tqpair=0x224a280 00:23:51.349 [2024-05-15 10:04:28.721254] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.721283] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x224a280) 00:23:51.349 [2024-05-15 10:04:28.721340] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.349 [2024-05-15 10:04:28.721441] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293030, cid 5, qid 0 00:23:51.349 [2024-05-15 10:04:28.721516] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.349 [2024-05-15 10:04:28.721550] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.349 [2024-05-15 10:04:28.721611] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.721643] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2293030) on tqpair=0x224a280 00:23:51.349 [2024-05-15 
10:04:28.721745] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.721809] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x224a280) 00:23:51.349 [2024-05-15 10:04:28.721844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.349 [2024-05-15 10:04:28.721969] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293030, cid 5, qid 0 00:23:51.349 [2024-05-15 10:04:28.722045] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.349 [2024-05-15 10:04:28.722130] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.349 [2024-05-15 10:04:28.722190] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.722249] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2293030) on tqpair=0x224a280 00:23:51.349 [2024-05-15 10:04:28.722366] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.722426] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x224a280) 00:23:51.349 [2024-05-15 10:04:28.722461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.349 [2024-05-15 10:04:28.722545] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293030, cid 5, qid 0 00:23:51.349 [2024-05-15 10:04:28.722611] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.349 [2024-05-15 10:04:28.722646] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.349 [2024-05-15 10:04:28.722672] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.722711] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2293030) on tqpair=0x224a280 00:23:51.349 [2024-05-15 10:04:28.722770] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.722798] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x224a280) 00:23:51.349 [2024-05-15 10:04:28.722827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.349 [2024-05-15 10:04:28.722876] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.722902] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x224a280) 00:23:51.349 [2024-05-15 10:04:28.722946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.349 [2024-05-15 10:04:28.723023] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.349 [2024-05-15 10:04:28.723064] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x224a280) 00:23:51.349 [2024-05-15 10:04:28.723151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.349 [2024-05-15 10:04:28.723231] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.349 
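
The GET LOG PAGE (02) admin commands here and immediately below carry their log-page selection in cdw10: the low byte is the Log Page Identifier and bits 31:16 are NUMDL, the zero-based transfer length in dwords. Decoding the four cdw10 values in this trace (07ff0001, 007f0002, 007f0003, 03ff0005) gives Error Information (01h, 8192 bytes), SMART / Health Information (02h, 512 bytes), Firmware Slot Information (03h, 512 bytes) and Commands Supported and Effects (05h, 4096 bytes), which lines up with the datal= values in the c2h_data entries that follow and with the later report sections. A small decoder for the values exactly as logged:

    /* Decode the GET LOG PAGE cdw10 values printed in this trace (NVMe spec
     * layout: LID in bits 7:0, NUMDL in bits 31:16, zero-based dword count). */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint32_t cdw10[] = { 0x07ff0001, 0x007f0002, 0x007f0003, 0x03ff0005 };

        for (unsigned i = 0; i < sizeof(cdw10) / sizeof(cdw10[0]); i++) {
            uint32_t lid   = cdw10[i] & 0xff;           /* log page identifier    */
            uint32_t numdl = (cdw10[i] >> 16) & 0xffff; /* dwords to read, 0-based */
            printf("LID 0x%02x -> %u bytes\n",
                   (unsigned)lid, (unsigned)((numdl + 1) * 4));
        }
        return 0;   /* prints 0x01 8192, 0x02 512, 0x03 512, 0x05 4096 */
    }
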
[2024-05-15 10:04:28.723287] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x224a280) 00:23:51.349 [2024-05-15 10:04:28.723321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.349 [2024-05-15 10:04:28.723402] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293030, cid 5, qid 0 00:23:51.350 [2024-05-15 10:04:28.723431] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292ed0, cid 4, qid 0 00:23:51.350 [2024-05-15 10:04:28.723458] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293190, cid 6, qid 0 00:23:51.350 [2024-05-15 10:04:28.723484] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22932f0, cid 7, qid 0 00:23:51.350 [2024-05-15 10:04:28.723591] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.350 [2024-05-15 10:04:28.723625] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.350 [2024-05-15 10:04:28.723698] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.723730] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x224a280): datao=0, datal=8192, cccid=5 00:23:51.350 [2024-05-15 10:04:28.723776] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293030) on tqpair(0x224a280): expected_datao=0, payload_size=8192 00:23:51.350 [2024-05-15 10:04:28.723842] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.723874] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.723900] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.723927] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.350 [2024-05-15 10:04:28.723975] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.350 [2024-05-15 10:04:28.724000] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.724025] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x224a280): datao=0, datal=512, cccid=4 00:23:51.350 [2024-05-15 10:04:28.724071] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2292ed0) on tqpair(0x224a280): expected_datao=0, payload_size=512 00:23:51.350 [2024-05-15 10:04:28.724146] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.724286] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.724318] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.724374] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.350 [2024-05-15 10:04:28.724407] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.350 [2024-05-15 10:04:28.724461] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.724491] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x224a280): datao=0, datal=512, cccid=6 00:23:51.350 [2024-05-15 10:04:28.724573] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293190) on tqpair(0x224a280): expected_datao=0, payload_size=512 00:23:51.350 [2024-05-15 10:04:28.724647] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:23:51.350 [2024-05-15 10:04:28.724681] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.724716] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.724744] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:51.350 [2024-05-15 10:04:28.724772] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:51.350 [2024-05-15 10:04:28.724797] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:51.350 [2024-05-15 10:04:28.724832] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x224a280): datao=0, datal=4096, cccid=7 00:23:51.350 [2024-05-15 10:04:28.724879] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22932f0) on tqpair(0x224a280): exp===================================================== 00:23:51.350 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.350 ===================================================== 00:23:51.350 Controller Capabilities/Features 00:23:51.350 ================================ 00:23:51.350 Vendor ID: 8086 00:23:51.350 Subsystem Vendor ID: 8086 00:23:51.350 Serial Number: SPDK00000000000001 00:23:51.350 Model Number: SPDK bdev Controller 00:23:51.350 Firmware Version: 24.05 00:23:51.350 Recommended Arb Burst: 6 00:23:51.350 IEEE OUI Identifier: e4 d2 5c 00:23:51.350 Multi-path I/O 00:23:51.350 May have multiple subsystem ports: Yes 00:23:51.350 May have multiple controllers: Yes 00:23:51.350 Associated with SR-IOV VF: No 00:23:51.350 Max Data Transfer Size: 131072 00:23:51.350 Max Number of Namespaces: 32 00:23:51.350 Max Number of I/O Queues: 127 00:23:51.350 NVMe Specification Version (VS): 1.3 00:23:51.350 NVMe Specification Version (Identify): 1.3 00:23:51.350 Maximum Queue Entries: 128 00:23:51.350 Contiguous Queues Required: Yes 00:23:51.350 Arbitration Mechanisms Supported 00:23:51.350 Weighted Round Robin: Not Supported 00:23:51.350 Vendor Specific: Not Supported 00:23:51.350 Reset Timeout: 15000 ms 00:23:51.350 Doorbell Stride: 4 bytes 00:23:51.350 NVM Subsystem Reset: Not Supported 00:23:51.350 Command Sets Supported 00:23:51.350 NVM Command Set: Supported 00:23:51.350 Boot Partition: Not Supported 00:23:51.350 Memory Page Size Minimum: 4096 bytes 00:23:51.350 Memory Page Size Maximum: 4096 bytes 00:23:51.350 Persistent Memory Region: Not Supported 00:23:51.350 Optional Asynchronous Events Supported 00:23:51.350 Namespace Attribute Notices: Supported 00:23:51.350 Firmware Activation Notices: Not Supported 00:23:51.350 ANA Change Notices: Not Supported 00:23:51.350 PLE Aggregate Log Change Notices: Not Supported 00:23:51.350 LBA Status Info Alert Notices: Not Supported 00:23:51.350 EGE Aggregate Log Change Notices: Not Supported 00:23:51.350 Normal NVM Subsystem Shutdown event: Not Supported 00:23:51.350 Zone Descriptor Change Notices: Not Supported 00:23:51.350 Discovery Log Change Notices: Not Supported 00:23:51.350 Controller Attributes 00:23:51.350 128-bit Host Identifier: Supported 00:23:51.350 Non-Operational Permissive Mode: Not Supported 00:23:51.350 NVM Sets: Not Supported 00:23:51.350 Read Recovery Levels: Not Supported 00:23:51.350 Endurance Groups: Not Supported 00:23:51.350 Predictable Latency Mode: Not Supported 00:23:51.350 Traffic Based Keep ALive: Not Supported 00:23:51.350 Namespace Granularity: Not Supported 00:23:51.350 SQ Associations: Not Supported 00:23:51.350 UUID List: Not Supported 00:23:51.350 Multi-Domain 
Subsystem: Not Supported 00:23:51.350 Fixed Capacity Management: Not Supported 00:23:51.350 Variable Capacity Management: Not Supported 00:23:51.350 Delete Endurance Group: Not Supported 00:23:51.350 Delete NVM Set: Not Supported 00:23:51.350 Extended LBA Formats Supported: Not Supported 00:23:51.350 Flexible Data Placement Supported: Not Supported 00:23:51.350 00:23:51.350 Controller Memory Buffer Support 00:23:51.350 ================================ 00:23:51.350 Supported: No 00:23:51.350 00:23:51.350 Persistent Memory Region Support 00:23:51.350 ================================ 00:23:51.350 Supported: No 00:23:51.350 00:23:51.350 Admin Command Set Attributes 00:23:51.350 ============================ 00:23:51.350 Security Send/Receive: Not Supported 00:23:51.350 Format NVM: Not Supported 00:23:51.350 Firmware Activate/Download: Not Supported 00:23:51.350 Namespace Management: Not Supported 00:23:51.350 Device Self-Test: Not Supported 00:23:51.350 Directives: Not Supported 00:23:51.350 NVMe-MI: Not Supported 00:23:51.350 Virtualization Management: Not Supported 00:23:51.350 Doorbell Buffer Config: Not Supported 00:23:51.350 Get LBA Status Capability: Not Supported 00:23:51.350 Command & Feature Lockdown Capability: Not Supported 00:23:51.350 Abort Command Limit: 4 00:23:51.350 Async Event Request Limit: 4 00:23:51.350 Number of Firmware Slots: N/A 00:23:51.350 Firmware Slot 1 Read-Only: N/A 00:23:51.350 Firmware Activation Without Reset: N/A 00:23:51.350 Multiple Update Detection Support: N/A 00:23:51.350 Firmware Update Granularity: No Information Provided 00:23:51.350 Per-Namespace SMART Log: No 00:23:51.350 Asymmetric Namespace Access Log Page: Not Supported 00:23:51.350 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:51.350 Command Effects Log Page: Supported 00:23:51.350 Get Log Page Extended Data: Supported 00:23:51.350 Telemetry Log Pages: Not Supported 00:23:51.350 Persistent Event Log Pages: Not Supported 00:23:51.350 Supported Log Pages Log Page: May Support 00:23:51.350 Commands Supported & Effects Log Page: Not Supported 00:23:51.350 Feature Identifiers & Effects Log Page:May Support 00:23:51.350 NVMe-MI Commands & Effects Log Page: May Support 00:23:51.350 Data Area 4 for Telemetry Log: Not Supported 00:23:51.350 Error Log Page Entries Supported: 128 00:23:51.350 Keep Alive: Supported 00:23:51.350 Keep Alive Granularity: 10000 ms 00:23:51.350 00:23:51.350 NVM Command Set Attributes 00:23:51.350 ========================== 00:23:51.350 Submission Queue Entry Size 00:23:51.350 Max: 64 00:23:51.351 Min: 64 00:23:51.351 Completion Queue Entry Size 00:23:51.351 Max: 16 00:23:51.351 Min: 16 00:23:51.351 Number of Namespaces: 32 00:23:51.351 Compare Command: Supported 00:23:51.351 Write Uncorrectable Command: Not Supported 00:23:51.351 Dataset Management Command: Supported 00:23:51.351 Write Zeroes Command: Supported 00:23:51.351 Set Features Save Field: Not Supported 00:23:51.351 Reservations: Supported 00:23:51.351 Timestamp: Not Supported 00:23:51.351 Copy: Supported 00:23:51.351 Volatile Write Cache: Present 00:23:51.351 Atomic Write Unit (Normal): 1 00:23:51.351 Atomic Write Unit (PFail): 1 00:23:51.351 Atomic Compare & Write Unit: 1 00:23:51.351 Fused Compare & Write: Supported 00:23:51.351 Scatter-Gather List 00:23:51.351 SGL Command Set: Supported 00:23:51.351 SGL Keyed: Supported 00:23:51.351 SGL Bit Bucket Descriptor: Not Supported 00:23:51.351 SGL Metadata Pointer: Not Supported 00:23:51.351 Oversized SGL: Not Supported 00:23:51.351 SGL Metadata Address: Not Supported 
00:23:51.351 SGL Offset: Supported 00:23:51.351 Transport SGL Data Block: Not Supported 00:23:51.351 Replay Protected Memory Block: Not Supported 00:23:51.351 00:23:51.351 Firmware Slot Information 00:23:51.351 ========================= 00:23:51.351 Active slot: 1 00:23:51.351 Slot 1 Firmware Revision: 24.05 00:23:51.351 00:23:51.351 00:23:51.351 Commands Supported and Effects 00:23:51.351 ============================== 00:23:51.351 Admin Commands 00:23:51.351 -------------- 00:23:51.351 Get Log Page (02h): Supported 00:23:51.351 Identify (06h): Supported 00:23:51.351 Abort (08h): Supported 00:23:51.351 Set Features (09h): Supported 00:23:51.351 Get Features (0Ah): Supported 00:23:51.351 Asynchronous Event Request (0Ch): Supported 00:23:51.351 Keep Alive (18h): Supported 00:23:51.351 I/O Commands 00:23:51.351 ------------ 00:23:51.351 Flush (00h): Supported LBA-Change 00:23:51.351 Write (01h): Supported LBA-Change 00:23:51.351 Read (02h): Supported 00:23:51.351 Compare (05h): Supported 00:23:51.351 Write Zeroes (08h): Supported LBA-Change 00:23:51.351 Dataset Management (09h): Supported LBA-Change 00:23:51.351 Copy (19h): Supported LBA-Change 00:23:51.351 Unknown (79h): Supported LBA-Change 00:23:51.351 Unknown (7Ah): Supported 00:23:51.351 00:23:51.351 Error Log 00:23:51.351 ========= 00:23:51.351 00:23:51.351 Arbitration 00:23:51.351 =========== 00:23:51.351 Arbitration Burst: 1 00:23:51.351 00:23:51.351 Power Management 00:23:51.351 ================ 00:23:51.351 Number of Power States: 1 00:23:51.351 Current Power State: Power State #0 00:23:51.351 Power State #0: 00:23:51.351 Max Power: 0.00 W 00:23:51.351 Non-Operational State: Operational 00:23:51.351 Entry Latency: Not Reported 00:23:51.351 Exit Latency: Not Reported 00:23:51.351 Relative Read Throughput: 0 00:23:51.351 Relative Read Latency: 0 00:23:51.351 Relative Write Throughput: 0 00:23:51.351 Relative Write Latency: 0 00:23:51.351 Idle Power: Not Reported 00:23:51.351 Active Power: Not Reported 00:23:51.351 Non-Operational Permissive Mode: Not Supported 00:23:51.351 00:23:51.351 Health Information 00:23:51.351 ================== 00:23:51.351 Critical Warnings: 00:23:51.351 Available Spare Space: OK 00:23:51.351 Temperature: OK 00:23:51.351 Device Reliability: OK 00:23:51.351 Read Only: No 00:23:51.351 Volatile Memory Backup: OK 00:23:51.351 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:51.351 Temperature Threshold: ected_datao=0, payload_size=4096 00:23:51.351 [2024-05-15 10:04:28.724987] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.351 [2024-05-15 10:04:28.724996] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:51.351 [2024-05-15 10:04:28.725001] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:51.351 [2024-05-15 10:04:28.725012] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.351 [2024-05-15 10:04:28.725019] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.351 [2024-05-15 10:04:28.725023] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.351 [2024-05-15 10:04:28.725028] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2293030) on tqpair=0x224a280 00:23:51.351 [2024-05-15 10:04:28.725060] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.351 [2024-05-15 10:04:28.725067] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.351 [2024-05-15 10:04:28.725072] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:23:51.351 [2024-05-15 10:04:28.725077] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292ed0) on tqpair=0x224a280 00:23:51.351 [2024-05-15 10:04:28.725103] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.351 [2024-05-15 10:04:28.725110] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.351 [2024-05-15 10:04:28.725115] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.351 [2024-05-15 10:04:28.725119] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2293190) on tqpair=0x224a280 00:23:51.351 [2024-05-15 10:04:28.725133] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.351 [2024-05-15 10:04:28.725140] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.351 [2024-05-15 10:04:28.725145] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.351 [2024-05-15 10:04:28.725149] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22932f0) on tqpair=0x224a280 00:23:51.351 [2024-05-15 10:04:28.725280] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.351 [2024-05-15 10:04:28.725286] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x224a280) 00:23:51.351 [2024-05-15 10:04:28.725294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.351 [2024-05-15 10:04:28.725640] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22932f0, cid 7, qid 0 00:23:51.351 [2024-05-15 10:04:28.725727] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.615 [2024-05-15 10:04:28.725803] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.615 [2024-05-15 10:04:28.725814] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.615 [2024-05-15 10:04:28.725820] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22932f0) on tqpair=0x224a280 00:23:51.615 [2024-05-15 10:04:28.725879] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:51.615 [2024-05-15 10:04:28.725896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.615 [2024-05-15 10:04:28.725904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.615 [2024-05-15 10:04:28.725912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.615 [2024-05-15 10:04:28.725920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:51.615 [2024-05-15 10:04:28.725931] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.615 [2024-05-15 10:04:28.725936] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.615 [2024-05-15 10:04:28.725941] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.615 [2024-05-15 10:04:28.725949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.725977] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.616 [2024-05-15 10:04:28.726030] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.726037] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.726042] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726046] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.726056] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726061] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726065] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.616 [2024-05-15 10:04:28.726072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.726106] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.616 [2024-05-15 10:04:28.726198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.726204] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.726209] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726214] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.726221] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:51.616 [2024-05-15 10:04:28.726227] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:51.616 [2024-05-15 10:04:28.726238] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726247] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.616 [2024-05-15 10:04:28.726255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.726273] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.616 [2024-05-15 10:04:28.726324] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.726331] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.726337] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726342] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.726354] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726359] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726364] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.616 [2024-05-15 10:04:28.726371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.726388] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.616 [2024-05-15 10:04:28.726450] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.726457] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.726461] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726466] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.726477] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726482] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726487] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.616 [2024-05-15 10:04:28.726494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.726510] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.616 [2024-05-15 10:04:28.726560] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.726567] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.726571] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726576] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.726588] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726593] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726598] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.616 [2024-05-15 10:04:28.726605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.726622] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.616 [2024-05-15 10:04:28.726675] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.726682] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.726686] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726691] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.726702] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726707] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726711] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.616 [2024-05-15 10:04:28.726718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.726735] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 
00:23:51.616 [2024-05-15 10:04:28.726785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.726791] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.726797] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726802] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.726813] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726818] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726822] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.616 [2024-05-15 10:04:28.726829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.726847] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.616 [2024-05-15 10:04:28.726914] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.726921] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.726925] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726930] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.726941] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726946] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.726950] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.616 [2024-05-15 10:04:28.726958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.726974] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.616 [2024-05-15 10:04:28.727046] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.727053] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.727057] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727062] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.727073] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727078] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.616 [2024-05-15 10:04:28.727102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.727122] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.616 [2024-05-15 10:04:28.727175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.727181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.727186] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727190] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.727202] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727207] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727211] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.616 [2024-05-15 10:04:28.727218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.727237] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.616 [2024-05-15 10:04:28.727306] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.727313] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.727319] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727324] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.727336] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727341] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727345] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.616 [2024-05-15 10:04:28.727353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.616 [2024-05-15 10:04:28.727370] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.616 [2024-05-15 10:04:28.727424] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.616 [2024-05-15 10:04:28.727431] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.616 [2024-05-15 10:04:28.727435] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727440] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.616 [2024-05-15 10:04:28.727451] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727456] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.616 [2024-05-15 10:04:28.727460] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.727467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.727484] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.727546] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.727553] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.727557] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.727562] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.727573] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.727578] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.727583] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.727590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.727607] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.727672] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.727678] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.727683] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.727687] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.727699] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.727703] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.727708] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.727715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.727732] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.727786] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.727792] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.727798] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.727802] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.727813] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.727818] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.727823] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.727830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.727847] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.727925] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.727932] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.727937] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.727941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.727953] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:23:51.617 [2024-05-15 10:04:28.727958] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.727963] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.727970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.727986] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.728050] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.728056] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.728061] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728065] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.728077] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728082] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728086] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.728104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.728132] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.728200] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.728207] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.728211] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728216] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.728227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728232] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728236] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.728244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.728261] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.728314] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.728320] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.728326] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728331] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.728342] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728347] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728352] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.728359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.728376] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.728426] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.728433] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.728437] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728442] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.728453] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728458] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728462] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.728470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.728486] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.728537] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.728543] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.728548] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728553] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.728564] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728569] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728573] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.728581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.728598] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.728654] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.728661] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.728666] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728670] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.728681] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728686] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728691] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.728698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.728714] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.728767] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.728774] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.728779] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728784] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.728795] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728800] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728804] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.617 [2024-05-15 10:04:28.728812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.617 [2024-05-15 10:04:28.728828] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.617 [2024-05-15 10:04:28.728889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.617 [2024-05-15 10:04:28.728896] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.617 [2024-05-15 10:04:28.728901] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728905] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.617 [2024-05-15 10:04:28.728916] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728921] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.617 [2024-05-15 10:04:28.728926] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.728933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.728950] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.618 [2024-05-15 10:04:28.729005] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.729012] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.729017] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729021] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.729032] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729037] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729042] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.729049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.729066] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 
0 00:23:51.618 [2024-05-15 10:04:28.729130] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.729138] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.729142] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729147] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.729158] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729163] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729168] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.729175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.729192] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.618 [2024-05-15 10:04:28.729245] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.729252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.729257] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729262] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.729273] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729278] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729283] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.729290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.729306] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.618 [2024-05-15 10:04:28.729362] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.729369] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.729373] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729378] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.729389] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729394] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729398] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.729406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.729422] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.618 [2024-05-15 10:04:28.729472] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.729479] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.729483] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729488] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.729499] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729504] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729508] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.729516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.729541] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.618 [2024-05-15 10:04:28.729592] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.729599] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.729603] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729608] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.729619] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729624] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729628] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.729636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.729652] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.618 [2024-05-15 10:04:28.729710] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.729717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.729722] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729727] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.729738] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729743] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729748] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.729755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.729772] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.618 [2024-05-15 10:04:28.729830] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.729848] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.729852] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729856] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.729867] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729872] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729876] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.729882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.729898] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.618 [2024-05-15 10:04:28.729961] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.729968] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.729972] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729976] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.729987] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729991] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.729996] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.730003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.730019] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.618 [2024-05-15 10:04:28.730082] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.730088] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.730092] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.730097] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.730749] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.730787] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.730812] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.730904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.731004] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.618 [2024-05-15 10:04:28.731284] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.731350] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.731407] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.731439] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.731527] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:23:51.618 [2024-05-15 10:04:28.731556] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.731611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.731647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.731751] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.618 [2024-05-15 10:04:28.731837] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.618 [2024-05-15 10:04:28.731871] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.618 [2024-05-15 10:04:28.731897] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.731934] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.618 [2024-05-15 10:04:28.731989] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.732016] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.618 [2024-05-15 10:04:28.732042] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.618 [2024-05-15 10:04:28.732070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.618 [2024-05-15 10:04:28.732218] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.619 [2024-05-15 10:04:28.732329] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.732363] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.619 [2024-05-15 10:04:28.732389] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.732462] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.619 [2024-05-15 10:04:28.732562] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.732665] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.732697] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.619 [2024-05-15 10:04:28.732764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-05-15 10:04:28.732835] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.619 [2024-05-15 10:04:28.732886] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.732921] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.619 [2024-05-15 10:04:28.732947] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.733018] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.619 [2024-05-15 10:04:28.733081] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.733144] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.733191] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.619 [2024-05-15 10:04:28.733220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-05-15 10:04:28.733306] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.619 [2024-05-15 10:04:28.733362] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.733392] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.619 [2024-05-15 10:04:28.733462] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.733495] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.619 [2024-05-15 10:04:28.733623] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.733684] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.733716] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.619 [2024-05-15 10:04:28.733794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-05-15 10:04:28.733866] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.619 [2024-05-15 10:04:28.733937] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.733970] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.619 [2024-05-15 10:04:28.734035] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.734067] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.619 [2024-05-15 10:04:28.734213] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.734278] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.734309] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.619 [2024-05-15 10:04:28.734364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-05-15 10:04:28.734436] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.619 [2024-05-15 10:04:28.734488] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.734518] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.619 [2024-05-15 10:04:28.734543] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.734614] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.619 [2024-05-15 10:04:28.734676] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.734725] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.734780] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.619 [2024-05-15 10:04:28.734815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-05-15 10:04:28.734923] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.619 [2024-05-15 10:04:28.734997] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.735049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.619 [2024-05-15 10:04:28.735210] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.735244] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.619 [2024-05-15 10:04:28.735336] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.735365] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.735428] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.619 [2024-05-15 10:04:28.735463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-05-15 10:04:28.735607] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.619 [2024-05-15 10:04:28.735678] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.735742] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.619 [2024-05-15 10:04:28.735801] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.735834] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.619 [2024-05-15 10:04:28.735940] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.735969] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.736005] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.619 [2024-05-15 10:04:28.736034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-05-15 10:04:28.736243] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.619 [2024-05-15 10:04:28.736327] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.736358] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.619 [2024-05-15 10:04:28.736383] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.736421] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.619 [2024-05-15 10:04:28.736472] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.736498] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.736522] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.619 [2024-05-15 10:04:28.736622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-05-15 10:04:28.736690] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 
0 00:23:51.619 [2024-05-15 10:04:28.736758] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.736787] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.619 [2024-05-15 10:04:28.736813] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.736838] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.619 [2024-05-15 10:04:28.736958] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.737044] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.737075] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.619 [2024-05-15 10:04:28.737152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-05-15 10:04:28.737224] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.619 [2024-05-15 10:04:28.737286] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.737327] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.619 [2024-05-15 10:04:28.737352] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.737378] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.619 [2024-05-15 10:04:28.737509] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.737581] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.737613] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.619 [2024-05-15 10:04:28.737678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-05-15 10:04:28.737769] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.619 [2024-05-15 10:04:28.737842] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.737884] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:51.619 [2024-05-15 10:04:28.737910] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.737941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280 00:23:51.619 [2024-05-15 10:04:28.738022] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.738049] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:51.619 [2024-05-15 10:04:28.738078] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280) 00:23:51.619 [2024-05-15 10:04:28.738120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-05-15 10:04:28.738211] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0 00:23:51.619 [2024-05-15 10:04:28.738365] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:51.619 [2024-05-15 10:04:28.738377] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5
00:23:51.619 [2024-05-15 10:04:28.738382] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:51.619 [2024-05-15 10:04:28.738387] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280
00:23:51.619 [2024-05-15 10:04:28.738401] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:51.619 [2024-05-15 10:04:28.738406] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:51.619 [2024-05-15 10:04:28.738410] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280)
00:23:51.619 [2024-05-15 10:04:28.738419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.619 [2024-05-15 10:04:28.738442] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0
00:23:51.623 [2024-05-15 10:04:28.756152] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:51.623 [2024-05-15 10:04:28.756330] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:51.623 [2024-05-15 10:04:28.756397] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:51.623 [2024-05-15 10:04:28.756457] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280
00:23:51.623 [2024-05-15 10:04:28.756607] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:51.623 [2024-05-15 10:04:28.756675] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:51.623 [2024-05-15 10:04:28.756707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x224a280)
00:23:51.623 [2024-05-15 10:04:28.756774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.623 [2024-05-15 10:04:28.756921] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2292d70, cid 3, qid 0
00:23:51.623 [2024-05-15 10:04:28.757003] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:51.623 [2024-05-15 10:04:28.757038] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:51.623 [2024-05-15 10:04:28.757102] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:51.623 [2024-05-15 10:04:28.757163] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2292d70) on tqpair=0x224a280
00:23:51.623 [2024-05-15 10:04:28.757283] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 31 milliseconds
00:23:51.623 0 Kelvin (-273 Celsius)
00:23:51.623 Available Spare: 0%
00:23:51.623 Available Spare Threshold: 0%
00:23:51.623 Life Percentage Used: 0%
00:23:51.623 Data Units Read: 0
00:23:51.623 Data Units Written: 0
00:23:51.623 Host Read Commands: 0
00:23:51.623 Host Write Commands: 0
00:23:51.623 Controller Busy Time: 0 minutes
00:23:51.623 Power Cycles: 0
00:23:51.623 Power On Hours: 0 hours
00:23:51.623 Unsafe Shutdowns: 0
00:23:51.623 Unrecoverable Media Errors: 0
00:23:51.623 Lifetime Error Log Entries: 0
00:23:51.623 Warning Temperature Time: 0 minutes
00:23:51.623 Critical Temperature Time: 0 minutes
00:23:51.623 
00:23:51.623 Number of Queues
00:23:51.623 ================
00:23:51.623 Number of I/O Submission Queues: 127
00:23:51.623 Number of I/O Completion Queues: 127
00:23:51.623 
00:23:51.623 Active Namespaces
00:23:51.623 =================
00:23:51.623 Namespace ID:1
00:23:51.623 Error Recovery Timeout: Unlimited
00:23:51.623 Command Set Identifier: NVM (00h)
00:23:51.623 Deallocate: Supported
00:23:51.623 Deallocated/Unwritten Error: Not Supported
00:23:51.623 Deallocated Read Value: Unknown
00:23:51.623 Deallocate in Write Zeroes: Not Supported
00:23:51.623 Deallocated Guard Field: 0xFFFF
00:23:51.623 Flush: Supported
00:23:51.623 Reservation: Supported
00:23:51.623 Namespace Sharing Capabilities: Multiple Controllers
00:23:51.623 Size (in LBAs): 131072 (0GiB)
00:23:51.623 Capacity (in LBAs): 131072 (0GiB)
00:23:51.623 Utilization (in LBAs): 131072 (0GiB)
00:23:51.623 NGUID: ABCDEF0123456789ABCDEF0123456789
00:23:51.623 EUI64: ABCDEF0123456789
00:23:51.623 UUID: 548953aa-1133-4e0e-bbf6-181c2e2fc855
00:23:51.623 Thin Provisioning: Not Supported
00:23:51.623 Per-NS Atomic Units: Yes
00:23:51.623 Atomic Boundary Size (Normal): 0
00:23:51.623 Atomic Boundary Size (PFail): 0
00:23:51.623 Atomic Boundary Offset: 0
00:23:51.623 Maximum Single Source Range Length: 65535
00:23:51.623 Maximum Copy Length: 65535
00:23:51.623 Maximum Source Range Count: 1
00:23:51.623 NGUID/EUI64 Never Reused: No
00:23:51.623 Namespace Write Protected: No
00:23:51.623 Number of LBA Formats: 1
00:23:51.623 Current LBA Format: LBA Format #00
00:23:51.623 LBA Format #00: Data Size: 512 Metadata Size: 0
00:23:51.623 
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:51.623 rmmod nvme_tcp 00:23:51.623 rmmod nvme_fabrics 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86427 ']' 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86427 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 86427 ']' 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # kill -0 86427 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 86427 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # echo 'killing process with pid 86427' 00:23:51.623 killing process with pid 86427 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # kill 86427 00:23:51.623 [2024-05-15 10:04:28.912602] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:51.623 10:04:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@971 -- # wait 86427 00:23:52.192 10:04:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:52.192 10:04:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:52.192 10:04:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:52.192 10:04:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:52.192 10:04:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:52.192 10:04:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.192 10:04:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.192 10:04:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.192 10:04:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:52.192 00:23:52.192 real 0m3.175s 00:23:52.192 user 0m8.353s 00:23:52.192 sys 0m0.915s 00:23:52.192 10:04:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:52.192 10:04:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:52.192 ************************************ 00:23:52.192 END TEST nvmf_identify 00:23:52.192 ************************************ 00:23:52.192 10:04:29 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:52.192 10:04:29 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:52.192 10:04:29 nvmf_tcp -- common/autotest_common.sh@1104 
-- # xtrace_disable 00:23:52.192 10:04:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:52.192 ************************************ 00:23:52.192 START TEST nvmf_perf 00:23:52.192 ************************************ 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:52.192 * Looking for test storage... 00:23:52.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:52.192 10:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.193 10:04:29 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:52.193 Cannot find device "nvmf_tgt_br" 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:52.193 Cannot find device "nvmf_tgt_br2" 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:23:52.193 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:52.451 Cannot find device "nvmf_tgt_br" 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:52.451 Cannot find device "nvmf_tgt_br2" 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:52.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:52.451 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:52.451 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:52.710 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:52.710 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:52.710 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:52.710 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:52.710 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:52.710 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:52.710 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:52.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:23:52.710 00:23:52.710 --- 10.0.0.2 ping statistics --- 00:23:52.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.710 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:52.710 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:52.710 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:52.710 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:52.710 00:23:52.710 --- 10.0.0.3 ping statistics --- 00:23:52.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.711 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:52.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:52.711 00:23:52.711 --- 10.0.0.1 ping statistics --- 00:23:52.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.711 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86658 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86658 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 86658 ']' 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:52.711 10:04:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.711 [2024-05-15 10:04:29.996423] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
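For reference, the nvmf_veth_init sequence traced above boils down to the condensed sketch below. Every command is taken from the trace itself; the interface names and the 10.0.0.x/24 addresses are the harness defaults visible in the log, nothing extra is configured here, and this is only a readable summary, not a replacement for the script.

  # create a network namespace for the target and the veth pairs that reach it
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # address the endpoints: initiator 10.0.0.1, target interfaces 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP traffic (port 4420) in, allow forwarding across the bridge, then verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1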
00:23:52.711 [2024-05-15 10:04:29.996512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.970 [2024-05-15 10:04:30.134811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.970 [2024-05-15 10:04:30.296634] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.970 [2024-05-15 10:04:30.296698] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.970 [2024-05-15 10:04:30.296710] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.970 [2024-05-15 10:04:30.296720] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.970 [2024-05-15 10:04:30.296728] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.970 [2024-05-15 10:04:30.296910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.970 [2024-05-15 10:04:30.297233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.970 [2024-05-15 10:04:30.297854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.970 [2024-05-15 10:04:30.297854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.904 10:04:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:53.904 10:04:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:23:53.904 10:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.904 10:04:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:53.904 10:04:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:53.904 10:04:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.904 10:04:31 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:53.904 10:04:31 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:54.162 10:04:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:23:54.162 10:04:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:54.420 10:04:31 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:23:54.420 10:04:31 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:54.986 10:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:54.986 10:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:23:54.986 10:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:54.986 10:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:54.986 10:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:55.244 [2024-05-15 10:04:32.468209] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.244 10:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:23:55.502 10:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:55.502 10:04:32 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:55.761 10:04:33 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:55.761 10:04:33 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:56.019 10:04:33 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.277 [2024-05-15 10:04:33.601441] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:56.277 [2024-05-15 10:04:33.604734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.277 10:04:33 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:56.844 10:04:33 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:56.844 10:04:33 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:56.844 10:04:33 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:56.844 10:04:33 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:57.780 Initializing NVMe Controllers 00:23:57.780 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:57.780 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:57.780 Initialization complete. Launching workers. 00:23:57.780 ======================================================== 00:23:57.780 Latency(us) 00:23:57.780 Device Information : IOPS MiB/s Average min max 00:23:57.780 PCIE (0000:00:10.0) NSID 1 from core 0: 21504.00 84.00 1486.51 358.30 14237.17 00:23:57.780 ======================================================== 00:23:57.780 Total : 21504.00 84.00 1486.51 358.30 14237.17 00:23:57.780 00:23:57.780 10:04:35 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:59.155 Initializing NVMe Controllers 00:23:59.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:59.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:59.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:59.155 Initialization complete. Launching workers. 
00:23:59.155 ======================================================== 00:23:59.155 Latency(us) 00:23:59.155 Device Information : IOPS MiB/s Average min max 00:23:59.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3557.99 13.90 280.79 96.24 13313.76 00:23:59.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 39.00 0.15 26152.04 22330.27 27050.42 00:23:59.155 ======================================================== 00:23:59.155 Total : 3596.99 14.05 561.30 96.24 27050.42 00:23:59.155 00:23:59.419 10:04:36 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:00.792 Initializing NVMe Controllers 00:24:00.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:00.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:00.792 Initialization complete. Launching workers. 00:24:00.792 ======================================================== 00:24:00.792 Latency(us) 00:24:00.792 Device Information : IOPS MiB/s Average min max 00:24:00.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9249.79 36.13 3459.83 751.76 17229.13 00:24:00.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 842.34 3.29 38142.01 23897.65 67070.49 00:24:00.792 ======================================================== 00:24:00.792 Total : 10092.13 39.42 6354.58 751.76 67070.49 00:24:00.792 00:24:00.792 10:04:38 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:24:00.792 10:04:38 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:04.071 Initializing NVMe Controllers 00:24:04.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.071 Controller IO queue size 128, less than required. 00:24:04.071 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.071 Controller IO queue size 128, less than required. 00:24:04.071 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:04.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:04.071 Initialization complete. Launching workers. 
00:24:04.071 ======================================================== 00:24:04.071 Latency(us) 00:24:04.071 Device Information : IOPS MiB/s Average min max 00:24:04.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1498.75 374.69 87012.14 51447.15 157981.24 00:24:04.071 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 217.95 54.49 658555.31 313111.43 1076657.69 00:24:04.071 ======================================================== 00:24:04.071 Total : 1716.69 429.17 159573.27 51447.15 1076657.69 00:24:04.071 00:24:04.071 10:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:04.071 Initializing NVMe Controllers 00:24:04.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:04.071 Controller IO queue size 128, less than required. 00:24:04.071 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.071 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:04.071 Controller IO queue size 128, less than required. 00:24:04.071 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:04.071 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:24:04.071 WARNING: Some requested NVMe devices were skipped 00:24:04.071 No valid NVMe controllers or AIO or URING devices found 00:24:04.071 10:04:41 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:07.354 Initializing NVMe Controllers 00:24:07.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:07.354 Controller IO queue size 128, less than required. 00:24:07.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.354 Controller IO queue size 128, less than required. 00:24:07.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:07.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:07.354 Initialization complete. Launching workers. 
00:24:07.354 00:24:07.354 ==================== 00:24:07.354 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:07.354 TCP transport: 00:24:07.354 polls: 20124 00:24:07.354 idle_polls: 17573 00:24:07.354 sock_completions: 2551 00:24:07.354 nvme_completions: 4943 00:24:07.354 submitted_requests: 7440 00:24:07.354 queued_requests: 1 00:24:07.354 00:24:07.354 ==================== 00:24:07.354 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:07.354 TCP transport: 00:24:07.354 polls: 16305 00:24:07.354 idle_polls: 13468 00:24:07.354 sock_completions: 2837 00:24:07.354 nvme_completions: 5045 00:24:07.354 submitted_requests: 7548 00:24:07.354 queued_requests: 1 00:24:07.354 ======================================================== 00:24:07.354 Latency(us) 00:24:07.354 Device Information : IOPS MiB/s Average min max 00:24:07.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1235.37 308.84 106089.89 62976.16 181396.55 00:24:07.354 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1260.87 315.22 102934.27 32161.35 163930.01 00:24:07.354 ======================================================== 00:24:07.354 Total : 2496.24 624.06 104495.96 32161.35 181396.55 00:24:07.354 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:07.354 rmmod nvme_tcp 00:24:07.354 rmmod nvme_fabrics 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86658 ']' 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86658 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 86658 ']' 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 86658 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 86658 00:24:07.354 killing process with pid 86658 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@965 -- # echo 'killing process with pid 86658' 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # kill 86658 00:24:07.354 [2024-05-15 10:04:44.708848] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:07.354 10:04:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@971 -- # wait 86658 00:24:08.287 10:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.287 10:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:08.287 10:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:08.287 10:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.287 10:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.287 10:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.287 10:04:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.287 10:04:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.287 10:04:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:08.287 00:24:08.287 real 0m16.100s 00:24:08.287 user 0m58.811s 00:24:08.287 sys 0m4.484s 00:24:08.287 10:04:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:08.287 10:04:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:08.287 ************************************ 00:24:08.287 END TEST nvmf_perf 00:24:08.287 ************************************ 00:24:08.287 10:04:45 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:08.287 10:04:45 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:08.287 10:04:45 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:08.287 10:04:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:08.287 ************************************ 00:24:08.287 START TEST nvmf_fio_host 00:24:08.287 ************************************ 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:08.287 * Looking for test storage... 
00:24:08.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.287 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:08.547 10:04:45 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:08.547 Cannot find device "nvmf_tgt_br" 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:08.547 Cannot find device "nvmf_tgt_br2" 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:08.547 Cannot find device "nvmf_tgt_br" 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:08.547 Cannot find device "nvmf_tgt_br2" 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:08.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:08.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:08.547 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:08.805 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:08.805 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:08.805 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:08.805 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:08.805 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:08.805 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:08.805 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:08.805 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:08.805 10:04:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:08.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:24:08.805 00:24:08.805 --- 10.0.0.2 ping statistics --- 00:24:08.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.805 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:08.805 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:08.805 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:24:08.805 00:24:08.805 --- 10.0.0.3 ping statistics --- 00:24:08.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.805 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:08.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:24:08.805 00:24:08.805 --- 10.0.0.1 ping statistics --- 00:24:08.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.805 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:08.805 10:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.806 10:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=87154 00:24:08.806 10:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:08.806 10:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.806 10:04:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 87154 00:24:08.806 10:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 87154 ']' 00:24:08.806 10:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.806 10:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:08.806 10:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.806 10:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:08.806 10:04:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.806 [2024-05-15 10:04:46.152609] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:24:08.806 [2024-05-15 10:04:46.152982] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.063 [2024-05-15 10:04:46.306675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.320 [2024-05-15 10:04:46.454298] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.320 [2024-05-15 10:04:46.454363] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:09.321 [2024-05-15 10:04:46.454375] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.321 [2024-05-15 10:04:46.454385] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.321 [2024-05-15 10:04:46.454394] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.321 [2024-05-15 10:04:46.454567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.321 [2024-05-15 10:04:46.454685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.321 [2024-05-15 10:04:46.455533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.321 [2024-05-15 10:04:46.455559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.888 [2024-05-15 10:04:47.135866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.888 Malloc1 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.888 [2024-05-15 10:04:47.242543] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:09.888 [2024-05-15 10:04:47.243383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:09.888 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:24:10.147 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:24:10.147 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:24:10.147 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:24:10.147 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:10.147 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:24:10.147 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:24:10.147 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:24:10.147 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:24:10.147 10:04:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:10.147 10:04:47 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:10.147 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:10.147 fio-3.35 00:24:10.147 Starting 1 thread 00:24:12.726 00:24:12.726 test: (groupid=0, jobs=1): err= 0: pid=87233: Wed May 15 10:04:49 2024 00:24:12.726 read: IOPS=9582, BW=37.4MiB/s (39.2MB/s)(75.1MiB/2006msec) 00:24:12.726 slat (nsec): min=1618, max=210386, avg=2234.00, stdev=2165.14 00:24:12.726 clat (usec): min=1736, max=11846, avg=6986.39, stdev=630.07 00:24:12.726 lat (usec): min=1743, max=11848, avg=6988.62, stdev=630.05 00:24:12.726 clat percentiles (usec): 00:24:12.726 | 1.00th=[ 5669], 5.00th=[ 6128], 10.00th=[ 6259], 20.00th=[ 6521], 00:24:12.726 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:24:12.726 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7701], 95.00th=[ 7898], 00:24:12.726 | 99.00th=[ 8586], 99.50th=[ 9503], 99.90th=[11338], 99.95th=[11469], 00:24:12.726 | 99.99th=[11863] 00:24:12.726 bw ( KiB/s): min=36704, max=39792, per=99.93%, avg=38306.00, stdev=1263.40, samples=4 00:24:12.726 iops : min= 9176, max= 9948, avg=9576.50, stdev=315.85, samples=4 00:24:12.726 write: IOPS=9589, BW=37.5MiB/s (39.3MB/s)(75.1MiB/2006msec); 0 zone resets 00:24:12.726 slat (nsec): min=1665, max=150069, avg=2305.46, stdev=1952.80 00:24:12.726 clat (usec): min=1439, max=11445, avg=6312.44, stdev=553.86 00:24:12.726 lat (usec): min=1447, max=11447, avg=6314.74, stdev=553.85 00:24:12.726 clat percentiles (usec): 00:24:12.726 | 1.00th=[ 5080], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5932], 00:24:12.726 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6456], 00:24:12.726 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111], 00:24:12.726 | 99.00th=[ 7570], 99.50th=[ 8225], 99.90th=[10552], 99.95th=[11076], 00:24:12.726 | 99.99th=[11338] 00:24:12.726 bw ( KiB/s): min=37536, max=40000, per=100.00%, avg=38358.00, stdev=1157.83, samples=4 00:24:12.726 iops : min= 9384, max=10000, avg=9589.50, stdev=289.46, samples=4 00:24:12.726 lat (msec) : 2=0.04%, 4=0.11%, 10=99.59%, 20=0.25% 00:24:12.726 cpu : usr=69.38%, sys=24.44%, ctx=40, majf=0, minf=3 00:24:12.726 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:12.726 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:12.726 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:12.726 issued rwts: total=19223,19236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:12.726 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:12.726 00:24:12.726 Run status group 0 (all jobs): 00:24:12.726 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=75.1MiB (78.7MB), run=2006-2006msec 00:24:12.726 WRITE: bw=37.5MiB/s (39.3MB/s), 37.5MiB/s-37.5MiB/s (39.3MB/s-39.3MB/s), io=75.1MiB (78.8MB), run=2006-2006msec 00:24:12.726 10:04:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:12.726 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 
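The run above drives the NVMe/TCP subsystem through fio's SPDK NVMe plugin: a stock fio binary is used, the plugin is pulled in via LD_PRELOAD, and the connection is described entirely in the --filename string. A minimal sketch of an equivalent standalone invocation follows. The job file contents and the name tcp_randrw.fio are illustrative assumptions (they mirror the options in the run header above, not the verbatim contents of example_config.fio); the plugin path, fio path, block size and filename syntax are the ones visible in the trace.

  # hypothetical minimal job file, mirroring the run header above (assumed values, not example_config.fio)
  cat > tcp_randrw.fio <<'EOF'
  [global]
  ioengine=spdk      # provided by the LD_PRELOAD'ed spdk_nvme plugin
  thread=1           # the SPDK fio plugin requires fio's thread mode
  direct=1
  rw=randrw
  iodepth=128
  time_based=1
  runtime=2
  [job0]
  EOF

  # block size and target are passed on the command line, the same way the harness does it
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio tcp_randrw.fio --bs=4096 \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'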
00:24:12.726 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib= 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:12.727 10:04:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:12.727 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:12.727 fio-3.35 00:24:12.727 Starting 1 thread 00:24:15.267 00:24:15.267 test: (groupid=0, jobs=1): err= 0: pid=87276: Wed May 15 10:04:52 2024 00:24:15.267 read: IOPS=8270, BW=129MiB/s (136MB/s)(259MiB/2005msec) 00:24:15.267 slat (usec): min=2, max=126, avg= 3.43, stdev= 1.78 00:24:15.267 clat (usec): min=3082, max=17221, avg=9019.23, stdev=2304.79 00:24:15.267 lat (usec): min=3085, max=17224, avg=9022.66, stdev=2304.87 00:24:15.267 clat percentiles (usec): 00:24:15.267 | 1.00th=[ 4424], 5.00th=[ 5473], 10.00th=[ 6063], 20.00th=[ 6980], 00:24:15.267 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[ 9503], 00:24:15.267 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11994], 95.00th=[13304], 00:24:15.267 | 99.00th=[14877], 99.50th=[15795], 99.90th=[16909], 99.95th=[16909], 00:24:15.267 | 99.99th=[17171] 00:24:15.267 bw ( KiB/s): min=58752, max=74720, per=50.08%, avg=66272.00, stdev=7128.80, samples=4 00:24:15.267 iops : min= 3672, max= 4670, avg=4142.00, stdev=445.55, samples=4 00:24:15.267 write: IOPS=4655, BW=72.7MiB/s 
(76.3MB/s)(135MiB/1856msec); 0 zone resets 00:24:15.267 slat (usec): min=29, max=206, avg=37.60, stdev= 7.30 00:24:15.267 clat (usec): min=4215, max=22588, avg=11570.78, stdev=2419.73 00:24:15.267 lat (usec): min=4247, max=22637, avg=11608.38, stdev=2420.53 00:24:15.267 clat percentiles (usec): 00:24:15.267 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:24:15.267 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11207], 60.00th=[11731], 00:24:15.267 | 70.00th=[12518], 80.00th=[13435], 90.00th=[14877], 95.00th=[15664], 00:24:15.267 | 99.00th=[19006], 99.50th=[19530], 99.90th=[22152], 99.95th=[22414], 00:24:15.267 | 99.99th=[22676] 00:24:15.267 bw ( KiB/s): min=62784, max=76928, per=92.10%, avg=68600.00, stdev=6133.90, samples=4 00:24:15.267 iops : min= 3924, max= 4808, avg=4287.50, stdev=383.37, samples=4 00:24:15.267 lat (msec) : 4=0.15%, 10=53.62%, 20=46.07%, 50=0.15% 00:24:15.267 cpu : usr=74.60%, sys=18.06%, ctx=8, majf=0, minf=14 00:24:15.267 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:15.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:15.267 issued rwts: total=16582,8640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:15.267 00:24:15.267 Run status group 0 (all jobs): 00:24:15.267 READ: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=259MiB (272MB), run=2005-2005msec 00:24:15.267 WRITE: bw=72.7MiB/s (76.3MB/s), 72.7MiB/s-72.7MiB/s (76.3MB/s-76.3MB/s), io=135MiB (142MB), run=1856-1856msec 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:15.267 rmmod nvme_tcp 00:24:15.267 rmmod nvme_fabrics 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:15.267 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87154 ']' 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87154 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 87154 
']' 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill -0 87154 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 87154 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:15.268 killing process with pid 87154 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 87154' 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 87154 00:24:15.268 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 87154 00:24:15.268 [2024-05-15 10:04:52.394248] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:15.534 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:15.534 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:15.534 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:15.534 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:15.534 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:15.534 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.535 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.535 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.535 10:04:52 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:15.535 00:24:15.535 real 0m7.308s 00:24:15.535 user 0m27.678s 00:24:15.535 sys 0m2.215s 00:24:15.535 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:15.535 10:04:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.535 ************************************ 00:24:15.535 END TEST nvmf_fio_host 00:24:15.535 ************************************ 00:24:15.535 10:04:52 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:15.535 10:04:52 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:15.535 10:04:52 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:15.535 10:04:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:15.806 ************************************ 00:24:15.806 START TEST nvmf_failover 00:24:15.806 ************************************ 00:24:15.806 10:04:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:15.806 * Looking for test storage... 
00:24:15.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:15.806 
10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:15.806 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:15.807 Cannot find device "nvmf_tgt_br" 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:15.807 Cannot find device "nvmf_tgt_br2" 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:15.807 Cannot find device "nvmf_tgt_br" 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:15.807 Cannot find device "nvmf_tgt_br2" 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:15.807 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:16.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:16.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:16.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:16.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:24:16.074 00:24:16.074 --- 10.0.0.2 ping statistics --- 00:24:16.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.074 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:16.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:16.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:24:16.074 00:24:16.074 --- 10.0.0.3 ping statistics --- 00:24:16.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.074 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:16.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:24:16.074 00:24:16.074 --- 10.0.0.1 ping statistics --- 00:24:16.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.074 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:16.074 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87485 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87485 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 87485 ']' 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:16.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
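For reference, the nvmf_veth_init sequence traced above can be reproduced by hand with the same iproute2 commands the trace shows. The sketch below is a condensed restatement of those commands (run as root); it keeps the interface names and 10.0.0.0/24 addresses this test environment uses and leaves out the second target interface (nvmf_tgt_if2 / nvmf_tgt_br2 / 10.0.0.3) that the trace also configures.

  # network namespace that will host the SPDK target
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator side, one for the target side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  # move the target end into the namespace and assign addresses
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # bring the links up
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together so 10.0.0.1 can reach 10.0.0.2
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port and allow traffic across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check, matching the ping output above
  ping -c 1 10.0.0.2

This is also why nvmf_tgt is launched above under 'ip netns exec nvmf_tgt_ns_spdk': the target's 10.0.0.2 listener lives inside the namespace, while initiator-side tools reach it from 10.0.0.1 through the bridge.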
00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:16.343 10:04:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.343 [2024-05-15 10:04:53.525440] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:24:16.343 [2024-05-15 10:04:53.525559] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.343 [2024-05-15 10:04:53.685977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:16.602 [2024-05-15 10:04:53.851739] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.602 [2024-05-15 10:04:53.851809] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.602 [2024-05-15 10:04:53.851820] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.602 [2024-05-15 10:04:53.851830] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.602 [2024-05-15 10:04:53.851838] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.602 [2024-05-15 10:04:53.852823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.602 [2024-05-15 10:04:53.852909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.602 [2024-05-15 10:04:53.852907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.536 10:04:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:17.536 10:04:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:24:17.536 10:04:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:17.536 10:04:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:17.536 10:04:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:17.536 10:04:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.536 10:04:54 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:17.536 [2024-05-15 10:04:54.907969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.794 10:04:54 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:18.052 Malloc0 00:24:18.052 10:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:18.311 10:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:18.568 10:04:55 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:18.826 [2024-05-15 10:04:56.048418] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:18.826 [2024-05-15 
10:04:56.049264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.826 10:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:19.090 [2024-05-15 10:04:56.376963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:19.090 10:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:19.371 [2024-05-15 10:04:56.701290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:19.371 10:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87610 00:24:19.371 10:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.371 10:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87610 /var/tmp/bdevperf.sock 00:24:19.371 10:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 87610 ']' 00:24:19.371 10:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.371 10:04:56 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:19.371 10:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:19.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.371 10:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
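At this point the target (pid 87485) is listening on 10.0.0.2 ports 4420, 4421 and 4422 for nqn.2016-06.io.spdk:cnode1, and bdevperf (pid 87610) has been started with its own RPC socket. The steps that follow in the trace attach two controller paths under one bdev name and then remove and re-add listeners while verify I/O runs, which is what exercises failover. A condensed, hand-written restatement of that RPC flow is sketched below; it is not the failover.sh script itself, and the 4421/4422 listener juggling, timings and cleanup are left out.

  spdk=/home/vagrant/spdk_repo/spdk
  rpc=$spdk/scripts/rpc.py

  # target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks,
  # the subsystem, its namespace, and the first listener
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf in RPC mode, then two TCP paths registered under
  # the single bdev name NVMe0
  $spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # start the verify workload, then drop the active listener to force a failover
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The 'ABORTED - SQ DELETION' completions that show up later in the bdevperf log (try.txt) line up with these listener removals: in-flight commands on the dropped path are aborted and the verify workload continues on the remaining path.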
00:24:19.371 10:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:19.371 10:04:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:20.761 10:04:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:20.761 10:04:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:24:20.761 10:04:57 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:21.020 NVMe0n1 00:24:21.020 10:04:58 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:21.279 00:24:21.279 10:04:58 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87652 00:24:21.279 10:04:58 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:21.279 10:04:58 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:22.654 10:04:59 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.654 10:04:59 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:25.941 10:05:02 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:25.941 00:24:25.941 10:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:26.208 [2024-05-15 10:05:03.576749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.208 [2024-05-15 10:05:03.577157] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.208 [2024-05-15 10:05:03.577291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.208 [2024-05-15 10:05:03.577443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.208 [2024-05-15 10:05:03.577592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.208 [2024-05-15 10:05:03.577716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.208 [2024-05-15 10:05:03.577840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.208 [2024-05-15 10:05:03.577924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.208 [2024-05-15 10:05:03.577987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.208 [2024-05-15 10:05:03.578076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set [... identical tqpair=0x2520f70 messages repeated; duplicates omitted ...] 00:24:26.208 [2024-05-15 10:05:03.582260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same
with the state(5) to be set 00:24:26.209 [2024-05-15 10:05:03.582405] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.209 [2024-05-15 10:05:03.582544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.209 [2024-05-15 10:05:03.582637] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.209 [2024-05-15 10:05:03.582700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.209 [2024-05-15 10:05:03.582805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.209 [2024-05-15 10:05:03.582873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.209 [2024-05-15 10:05:03.582961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.209 [2024-05-15 10:05:03.583024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2520f70 is same with the state(5) to be set 00:24:26.466 10:05:03 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:29.751 10:05:06 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.751 [2024-05-15 10:05:06.897664] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.751 10:05:06 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:30.684 10:05:07 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:30.941 [2024-05-15 10:05:08.141775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.941 [2024-05-15 10:05:08.142111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.142295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.142441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.142571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.142752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.142865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.143001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.143165] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 
10:05:08.143326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set [... identical tqpair=0x2378900 messages repeated; duplicates omitted ...] 00:24:30.942 [2024-05-15 10:05:08.145143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same
with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.145259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.145378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.145497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.145572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.145639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.145753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.145860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.145926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.146049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.146146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.146223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.146299] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.146364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.146469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 [2024-05-15 10:05:08.146537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2378900 is same with the state(5) to be set 00:24:30.942 10:05:08 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 87652 00:24:37.509 0 00:24:37.509 10:05:13 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 87610 00:24:37.509 10:05:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 87610 ']' 00:24:37.509 10:05:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 87610 00:24:37.509 10:05:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:24:37.509 10:05:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:37.509 10:05:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 87610 00:24:37.509 killing process with pid 87610 00:24:37.509 10:05:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:37.509 10:05:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:37.509 10:05:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 
87610' 00:24:37.510 10:05:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 87610 00:24:37.510 10:05:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 87610 00:24:37.510 10:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:37.510 [2024-05-15 10:04:56.781783] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:24:37.510 [2024-05-15 10:04:56.781912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87610 ] 00:24:37.510 [2024-05-15 10:04:56.915588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.510 [2024-05-15 10:04:57.077791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.510 Running I/O for 15 seconds... 00:24:37.510 [2024-05-15 10:04:59.894257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.510 [2024-05-15 10:04:59.894343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.510 [2024-05-15 10:04:59.894393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.510 [2024-05-15 10:04:59.894426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.510 [2024-05-15 10:04:59.894459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.510 [2024-05-15 10:04:59.894492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.510 [2024-05-15 10:04:59.894524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.510 [2024-05-15 10:04:59.894557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 
nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.894977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.894994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.510 [2024-05-15 10:04:59.895149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 
[2024-05-15 10:04:59.895326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:92656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.510 [2024-05-15 10:04:59.895617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.510 [2024-05-15 10:04:59.895632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.895650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.895665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.895682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.895698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.895716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.895731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.895749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.895765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.895782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.895798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.895815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.895833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.895850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.895866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.895884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.895900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.895918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.895933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.895952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.895968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.895985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.511 [2024-05-15 10:04:59.896560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.511 [2024-05-15 10:04:59.896593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.511 [2024-05-15 10:04:59.896626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.511 [2024-05-15 10:04:59.896658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.511 [2024-05-15 10:04:59.896709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 
[2024-05-15 10:04:59.896726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.511 [2024-05-15 10:04:59.896742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.511 [2024-05-15 10:04:59.896776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.511 [2024-05-15 10:04:59.896812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.511 [2024-05-15 10:04:59.896845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.896971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.896987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.897005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.897021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.897039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.511 [2024-05-15 10:04:59.897055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.511 [2024-05-15 10:04:59.897073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:40 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93088 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.897970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.897986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:93152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:37.512 [2024-05-15 10:04:59.898168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:93176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:93200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:93208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:93232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.512 [2024-05-15 10:04:59.898524] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.512 [2024-05-15 10:04:59.898541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.513 [2024-05-15 10:04:59.898557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.898575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.513 [2024-05-15 10:04:59.898591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.898609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.513 [2024-05-15 10:04:59.898625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.898642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.513 [2024-05-15 10:04:59.898660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.898689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:93288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.513 [2024-05-15 10:04:59.898705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.898722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.513 [2024-05-15 10:04:59.898744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.898761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.513 [2024-05-15 10:04:59.898776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.898793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.513 [2024-05-15 10:04:59.898809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.898826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe51830 is same with the state(5) to be set 00:24:37.513 [2024-05-15 10:04:59.898847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:37.513 [2024-05-15 10:04:59.898859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:37.513 [2024-05-15 10:04:59.898871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93320 len:8 PRP1 0x0 PRP2 0x0 00:24:37.513 [2024-05-15 
10:04:59.898886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.898971] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe51830 was disconnected and freed. reset controller. 00:24:37.513 [2024-05-15 10:04:59.898991] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:37.513 [2024-05-15 10:04:59.899092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.513 [2024-05-15 10:04:59.899122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.899141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.513 [2024-05-15 10:04:59.899158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.899174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.513 [2024-05-15 10:04:59.899190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.899209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.513 [2024-05-15 10:04:59.899225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:04:59.899242] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.513 [2024-05-15 10:04:59.902814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.513 [2024-05-15 10:04:59.902869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde25f0 (9): Bad file descriptor 00:24:37.513 [2024-05-15 10:04:59.937931] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
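The notices ending here capture the substance of this stretch of the run: queued I/O on the old qpair is aborted with ABORTED - SQ DELETION, qpair 0xe51830 is disconnected and freed, the driver starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes successfully. A minimal sketch of the kind of two-path attach that exercises this behavior is shown below; it assumes scripts/rpc.py, the illustrative bdev name Nvme0, and the -x failover multipath mode, none of which are taken from this job's actual test script.
# illustrative sketch only -- names and options here are assumptions, not this job's config
# register two TCP paths to the same subsystem so bdev_nvme can fail over between them
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -x failover
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
# removing the 4420 listener while I/O is in flight would be expected to produce a log
# shaped like the one above: aborted submissions, "Start failover from 10.0.0.2:4420 to
# 10.0.0.2:4421", then "Resetting controller successful." once I/O resumes on the new path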
00:24:37.513 [2024-05-15 10:05:03.578040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.513 [2024-05-15 10:05:03.578120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.578140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.513 [2024-05-15 10:05:03.578186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.578206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.513 [2024-05-15 10:05:03.578222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.578237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:37.513 [2024-05-15 10:05:03.578253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.578268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde25f0 is same with the state(5) to be set 00:24:37.513 [2024-05-15 10:05:03.583601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.513 [2024-05-15 10:05:03.583642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.583674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.583690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.583708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.583724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.583742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.583758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.583776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.583792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.583809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.583825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.583843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:32 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.583858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.583876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.583892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.583910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.583926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.583943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.583959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.583999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.584015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.584032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.584048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.584066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.584082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.584109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.584125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.584142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.584158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.584176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.584192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.584210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.584230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.584248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.584264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.584282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.584298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.513 [2024-05-15 10:05:03.584315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.513 [2024-05-15 10:05:03.584331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 
10:05:03.584572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.514 [2024-05-15 10:05:03.584913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.514 [2024-05-15 10:05:03.584930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:106 nsid:1 lba:272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:37.514 [2024-05-15 10:05:03.584946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repetitive output condensed: from 2024-05-15 10:05:03.584963 to 10:05:03.588043, nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion print the same NOTICE pair for every remaining queued command on sqid:1 - WRITE commands covering lba 280 through 912 and READ commands covering lba 130984 through 131056 - each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:24:37.516 [2024-05-15 10:05:03.588060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffc300 is same with the state(5) to be set
00:24:37.516 [2024-05-15 10:05:03.588082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
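The NOTICE pairs above are SPDK printing each still-queued command together with the completion it is given while the qpair is drained: "(00/08)" is the NVMe status code type / status code pair, i.e. Status Code Type 0x0 (Generic Command Status) with Status Code 0x08 (Command Aborted due to SQ Deletion), and dnr:0 means the Do Not Retry bit is clear, so the command may be resubmitted once the new path is up. As a minimal sketch, assuming this console output has been saved to a local file named nvmf_failover.log (a hypothetical name, not something the job itself produces), the aborted commands can be tallied per opcode:

  # Count how many queued commands of each opcode were printed while the
  # qpair was being drained (one nvme_io_qpair_print_command record per command).
  grep 'nvme_io_qpair_print_command' nvmf_failover.log \
    | sed -n 's/.*\*NOTICE\*: \([A-Z]*\) sqid:\([0-9]*\).*/\1 sqid:\2/p' \
    | sort | uniq -c

The same filter with 'nvme_admin_qpair_print_command' would count the aborted admin commands (the ASYNC EVENT REQUESTs that appear further down).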
00:24:37.516 [2024-05-15 10:05:03.588103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:37.516 [2024-05-15 10:05:03.588131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:920 len:8 PRP1 0x0 PRP2 0x0
00:24:37.516 [2024-05-15 10:05:03.588146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:37.516 [2024-05-15 10:05:03.588243] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xffc300 was disconnected and freed. reset controller.
00:24:37.516 [2024-05-15 10:05:03.588263] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:37.516 [2024-05-15 10:05:03.588280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:37.516 [2024-05-15 10:05:03.588332] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde25f0 (9): Bad file descriptor
00:24:37.516 [2024-05-15 10:05:03.591891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:37.516 [2024-05-15 10:05:03.625592] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[repetitive output condensed: from 2024-05-15 10:05:08.143524 to 10:05:08.143691, nvme_qpair.c: 223:nvme_admin_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion report four queued ASYNC EVENT REQUEST (0c) commands (qid:0, cid:0 through cid:3) completed as ABORTED - SQ DELETION (00/08)]
00:24:37.517 [2024-05-15 10:05:08.143707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde25f0 is same with the state(5) to be set
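The sequence just above is the failover path itself: once qpair 0xffc300 is drained, bdev_nvme's disconnected-qpair callback requests a controller reset, the transport ID is switched from 10.0.0.2:4421 to 10.0.0.2:4422, the old controller is marked failed (the flush of tqpair 0xde25f0 fails with errno 9, EBADF, the socket descriptor no longer being valid), and the reset completes roughly 37 ms after the failover started (10:05:03.588263 to 10:05:03.625592). Against the same hypothetical nvmf_failover.log, the timeline can be pulled out without the per-command noise:

  # Extract only the disconnect / failover / reset records, in the order they were logged.
  grep -E 'bdev_nvme_disconnected_qpair_cb|bdev_nvme_failover_trid|nvme_ctrlr_fail|nvme_ctrlr_disconnect|_bdev_nvme_reset_ctrlr_complete' \
    nvmf_failover.log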
[repetitive output condensed: from 2024-05-15 10:05:08.146719 to 10:05:08.150618, nvme_qpair.c: 243:nvme_io_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion print the same NOTICE pair for every queued command on sqid:1 - READ commands covering lba 113568 through 113768 and WRITE commands covering lba 113776 through 114464 - each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:24:37.520 [2024-05-15 10:05:08.150635]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.150650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.150673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.150689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.150707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.150722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.150740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.150756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.150773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.150788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.150808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.150824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.150841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.150856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.150873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.150889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.150906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.150922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.150940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.150955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.150972] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.150988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.151006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.151021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.151039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.151063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.151081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.520 [2024-05-15 10:05:08.151110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.151127] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe56d60 is same with the state(5) to be set 00:24:37.520 [2024-05-15 10:05:08.151149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:37.520 [2024-05-15 10:05:08.151161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:37.520 [2024-05-15 10:05:08.151173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114584 len:8 PRP1 0x0 PRP2 0x0 00:24:37.520 [2024-05-15 10:05:08.151189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.520 [2024-05-15 10:05:08.151266] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe56d60 was disconnected and freed. reset controller. 00:24:37.520 [2024-05-15 10:05:08.151286] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:37.520 [2024-05-15 10:05:08.151302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.520 [2024-05-15 10:05:08.154742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.520 [2024-05-15 10:05:08.154788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde25f0 (9): Bad file descriptor 00:24:37.520 [2024-05-15 10:05:08.190331] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
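The failover pass above is judged by counting these controller-reset notices in the captured bdevperf output; a minimal sketch of that check, assuming the same try.txt capture file referenced in the trace below (the failure message is editorial):
count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
if (( count != 3 )); then
    echo "expected 3 successful controller resets, saw $count" >&2
    exit 1
fi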
00:24:37.520
00:24:37.520 Latency(us)
00:24:37.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:37.520 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:37.520 Verification LBA range: start 0x0 length 0x4000
00:24:37.520 NVMe0n1 : 15.01 9911.77 38.72 250.28 0.00 12568.72 557.84 16852.11
00:24:37.520 ===================================================================================================================
00:24:37.520 Total : 9911.77 38.72 250.28 0.00 12568.72 557.84 16852.11
00:24:37.520 Received shutdown signal, test time was about 15.000000 seconds
00:24:37.520
00:24:37.520 Latency(us)
00:24:37.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:37.520 ===================================================================================================================
00:24:37.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:37.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=87862
00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 87862 /var/tmp/bdevperf.sock
00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 87862 ']'
00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100
00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
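The trace above relaunches bdevperf in RPC-server mode (-z) against a private socket so the test can attach controllers first and start the workload on demand with bdevperf.py perform_tests; roughly, assuming the backgrounding implied by the recorded bdevperf_pid:
# start bdevperf as an RPC server; no bdevs are configured yet
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# ...attach NVMe-oF controllers over /var/tmp/bdevperf.sock, then kick off the run:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests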
00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:37.520 10:05:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:38.123 10:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:38.123 10:05:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:24:38.123 10:05:15 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:38.123 [2024-05-15 10:05:15.493164] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:38.381 10:05:15 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:38.639 [2024-05-15 10:05:15.793488] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:38.639 10:05:15 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:38.896 NVMe0n1 00:24:38.897 10:05:16 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:39.154 00:24:39.154 10:05:16 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:39.718 00:24:39.718 10:05:16 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:39.718 10:05:16 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:39.976 10:05:17 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:40.233 10:05:17 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:43.620 10:05:20 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:43.620 10:05:20 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:43.620 10:05:20 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:43.620 10:05:20 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88004 00:24:43.620 10:05:20 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88004 00:24:44.664 0 00:24:44.664 10:05:21 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:44.664 [2024-05-15 10:05:14.242182] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:24:44.664 [2024-05-15 10:05:14.242320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87862 ] 00:24:44.664 [2024-05-15 10:05:14.378806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.664 [2024-05-15 10:05:14.542800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.664 [2024-05-15 10:05:17.407958] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:44.664 [2024-05-15 10:05:17.408127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.664 [2024-05-15 10:05:17.408163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.664 [2024-05-15 10:05:17.408184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.664 [2024-05-15 10:05:17.408199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.664 [2024-05-15 10:05:17.408215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.664 [2024-05-15 10:05:17.408231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.664 [2024-05-15 10:05:17.408248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:44.664 [2024-05-15 10:05:17.408263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:44.664 [2024-05-15 10:05:17.408280] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.664 [2024-05-15 10:05:17.408338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.664 [2024-05-15 10:05:17.408370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf805f0 (9): Bad file descriptor 00:24:44.664 [2024-05-15 10:05:17.416913] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:44.664 Running I/O for 1 seconds... 
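The preceding trace gives the bdevperf-side NVMe0 controller three paths to the same subsystem and then removes the active one to force a failover; a condensed sketch of that flow (the RPC shorthand variable and the loop are editorial, the individual commands appear verbatim in the trace):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# expose two extra listeners on the target
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# register all three paths with the bdevperf instance
for port in 4420 4421 4422; do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# dropping the active path makes bdev_nvme fail over to the next trid
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3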
00:24:44.664
00:24:44.664 Latency(us)
00:24:44.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:44.664 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:44.664 Verification LBA range: start 0x0 length 0x4000
00:24:44.664 NVMe0n1 : 1.01 7663.61 29.94 0.00 0.00 16621.08 2231.34 23717.79
00:24:44.664 ===================================================================================================================
00:24:44.664 Total : 7663.61 29.94 0.00 0.00 16621.08 2231.34 23717.79
00:24:44.664 10:05:21 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:44.664 10:05:21 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:44.922 10:05:22 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:45.180 10:05:22 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:45.180 10:05:22 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:45.438 10:05:22 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:45.696 10:05:22 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 87862
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 87862 ']'
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 87862
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 87862
00:24:48.976 killing process with pid 87862
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']'
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 87862'
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 87862
00:24:48.976 10:05:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 87862
00:24:49.543 10:05:26 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:24:49.543 10:05:26 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:49.802 10:05:26 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:24:49.802 10:05:26 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:49.802 10:05:26
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:49.802 10:05:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:49.802 10:05:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:49.802 10:05:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:49.802 10:05:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:49.802 10:05:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:49.802 10:05:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:49.802 rmmod nvme_tcp 00:24:49.802 rmmod nvme_fabrics 00:24:49.802 10:05:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:49.802 10:05:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87485 ']' 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87485 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 87485 ']' 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 87485 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 87485 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:24:49.802 killing process with pid 87485 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 87485' 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 87485 00:24:49.802 [2024-05-15 10:05:27.032855] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:49.802 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 87485 00:24:50.375 10:05:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:50.375 10:05:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:50.375 10:05:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:50.375 10:05:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.375 10:05:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:50.375 10:05:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.375 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.375 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.375 10:05:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:50.375 00:24:50.375 real 0m34.583s 00:24:50.375 user 2m13.014s 00:24:50.375 sys 0m6.302s 00:24:50.375 10:05:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:50.375 10:05:27 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:24:50.375 ************************************ 00:24:50.375 END TEST nvmf_failover 00:24:50.375 ************************************ 00:24:50.375 10:05:27 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:50.375 10:05:27 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:50.375 10:05:27 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:50.375 10:05:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:50.375 ************************************ 00:24:50.375 START TEST nvmf_host_discovery 00:24:50.375 ************************************ 00:24:50.375 10:05:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:50.375 * Looking for test storage... 00:24:50.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:50.375 10:05:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:50.375 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:50.375 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.375 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.375 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.375 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.375 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:50.376 Cannot find device "nvmf_tgt_br" 00:24:50.376 
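For the discovery test the initiator and target sides are wired together with veth pairs, a bridge, and a dedicated network namespace, as the nvmf_veth_init trace that follows shows; a condensed sketch under those same names and addresses (the second target interface and the 10.0.0.3 address are omitted, and error handling is left out):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
# bring the interfaces up and bridge the host-side ends together
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# allow NVMe/TCP traffic in and verify connectivity
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2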
10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:24:50.376 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:50.635 Cannot find device "nvmf_tgt_br2" 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:50.635 Cannot find device "nvmf_tgt_br" 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:50.635 Cannot find device "nvmf_tgt_br2" 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:50.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:50.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:24:50.635 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:50.636 10:05:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:50.636 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:50.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:24:50.903 00:24:50.903 --- 10.0.0.2 ping statistics --- 00:24:50.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.903 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:50.903 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:50.903 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:24:50.903 00:24:50.903 --- 10.0.0.3 ping statistics --- 00:24:50.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.903 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:50.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:24:50.903 00:24:50.903 --- 10.0.0.1 ping statistics --- 00:24:50.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.903 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88308 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88308 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 88308 ']' 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:50.903 10:05:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.903 [2024-05-15 10:05:28.247283] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:24:50.903 [2024-05-15 10:05:28.247745] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:51.161 [2024-05-15 10:05:28.407032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.419 [2024-05-15 10:05:28.583188] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.419 [2024-05-15 10:05:28.583528] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:51.419 [2024-05-15 10:05:28.583676] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.419 [2024-05-15 10:05:28.583825] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.419 [2024-05-15 10:05:28.583868] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:51.419 [2024-05-15 10:05:28.583996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.984 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:51.984 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:24:51.984 10:05:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:51.984 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:51.984 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.243 [2024-05-15 10:05:29.384783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.243 [2024-05-15 10:05:29.396705] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:52.243 [2024-05-15 10:05:29.397168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.243 null0 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.243 null1 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:52.243 
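The target started inside the namespace (nvmf_tgt -m 0x2) is then configured over RPC: a TCP transport, a listener for the well-known discovery subsystem on port 8009, and two null bdevs to be discovered later. A sketch of the equivalent standalone calls, assuming scripts/rpc.py as the stand-in for the rpc_cmd wrapper used by the trace:
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512
$RPC bdev_null_create null1 1000 512
$RPC bdev_wait_for_examine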
10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88365 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88365 /tmp/host.sock 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 88365 ']' 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:52.243 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:52.243 10:05:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:52.243 [2024-05-15 10:05:29.492518] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:24:52.243 [2024-05-15 10:05:29.492911] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88365 ] 00:24:52.525 [2024-05-15 10:05:29.642269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.525 [2024-05-15 10:05:29.820466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- 
# get_subsystem_names 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.460 10:05:30 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 [2024-05-15 10:05:30.797353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.460 10:05:30 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.460 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:53.719 10:05:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:53.719 10:05:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == \n\v\m\e\0 ]] 00:24:53.719 10:05:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:24:54.285 [2024-05-15 10:05:31.528078] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:54.285 [2024-05-15 10:05:31.528145] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:54.285 [2024-05-15 10:05:31.528164] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:54.285 [2024-05-15 10:05:31.616278] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:54.541 [2024-05-15 10:05:31.679771] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:54.541 [2024-05-15 10:05:31.679830] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0 ]] 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:54.800 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:55.059 
10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.059 [2024-05-15 10:05:32.331493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:55.059 [2024-05-15 10:05:32.332524] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:55.059 [2024-05-15 10:05:32.332566] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
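[editor's note] The autotest_common.sh@911-917 entries traced above and below are the generic retry loop the test leans on for every readiness check. Reconstructed from those xtrace lines, the helper is roughly the following sketch; the failure path once the retries run out is an assumption, it never fires in this trace:

    waitforcondition() {
        local cond=$1     # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10
        while (( max-- )); do
            # @914: re-evaluate the condition string in the caller's context
            if eval "$cond"; then
                return 0  # @915: condition met
            fi
            sleep 1       # @917: give discovery/AER handling time to catch up
        done
        return 1          # assumed: give up after ~10 one-second attempts
    }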
00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:55.059 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:55.060 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:24:55.060 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.060 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.060 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:55.060 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.060 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:55.060 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:55.060 [2024-05-15 10:05:32.418324] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:55.060 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.317 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:55.317 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:55.317 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:55.317 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:55.317 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:55.317 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:55.317 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:55.317 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:24:55.317 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:55.318 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:55.318 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.318 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:24:55.318 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:55.318 10:05:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:55.318 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:55.318 [2024-05-15 10:05:32.481364] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:55.318 [2024-05-15 10:05:32.481393] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:55.318 [2024-05-15 10:05:32.481403] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:55.318 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:55.318 10:05:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:56.315 10:05:33 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.315 [2024-05-15 10:05:33.632825] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:56.315 [2024-05-15 10:05:33.632868] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.315 [2024-05-15 10:05:33.641448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.315 [2024-05-15 10:05:33.641488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.315 [2024-05-15 10:05:33.641505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.315 [2024-05-15 10:05:33.641516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.315 [2024-05-15 10:05:33.641528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:56.315 [2024-05-15 10:05:33.641539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.315 [2024-05-15 10:05:33.641552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.315 [2024-05-15 10:05:33.641563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.315 [2024-05-15 10:05:33.641574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7390 is same with the state(5) to be set 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:56.315 [2024-05-15 10:05:33.651376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7390 (9): Bad file descriptor 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.315 [2024-05-15 10:05:33.661398] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:56.315 [2024-05-15 10:05:33.661584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.315 [2024-05-15 10:05:33.661631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.315 [2024-05-15 10:05:33.661647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7390 with addr=10.0.0.2, port=4420 00:24:56.315 [2024-05-15 10:05:33.661661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7390 is same with the state(5) to be set 00:24:56.315 [2024-05-15 10:05:33.661681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7390 (9): Bad file descriptor 00:24:56.315 [2024-05-15 10:05:33.661707] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:56.315 [2024-05-15 10:05:33.661719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:56.315 [2024-05-15 10:05:33.661733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:56.315 [2024-05-15 10:05:33.661750] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
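[editor's note] The host/discovery.sh@59 and @55 pipelines that keep appearing in this trace, and that drive the subsystem-name and bdev-list checks that follow, can be read back out of the xtrace almost verbatim; as a sketch:

    get_subsystem_names() {
        # names of the NVMe controllers the host app (/tmp/host.sock) has attached
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # namespaces that surfaced as bdevs on the host side (here: nvme0n1 nvme0n2)
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # typical use, as seen in the trace:
    # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'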
00:24:56.315 [2024-05-15 10:05:33.671491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:56.315 [2024-05-15 10:05:33.671652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.315 [2024-05-15 10:05:33.671697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.315 [2024-05-15 10:05:33.671712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7390 with addr=10.0.0.2, port=4420 00:24:56.315 [2024-05-15 10:05:33.671725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7390 is same with the state(5) to be set 00:24:56.315 [2024-05-15 10:05:33.671744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7390 (9): Bad file descriptor 00:24:56.315 [2024-05-15 10:05:33.671769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:56.315 [2024-05-15 10:05:33.671780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:56.315 [2024-05-15 10:05:33.671792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:56.315 [2024-05-15 10:05:33.671806] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.315 [2024-05-15 10:05:33.681585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:56.315 [2024-05-15 10:05:33.681737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.315 [2024-05-15 10:05:33.681784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.315 [2024-05-15 10:05:33.681800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7390 with addr=10.0.0.2, port=4420 00:24:56.315 [2024-05-15 10:05:33.681814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7390 is same with the state(5) to be set 00:24:56.315 [2024-05-15 10:05:33.681833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7390 (9): Bad file descriptor 00:24:56.315 [2024-05-15 10:05:33.681867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:56.315 [2024-05-15 10:05:33.681879] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:56.315 [2024-05-15 10:05:33.681892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:56.315 [2024-05-15 10:05:33.681907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:56.315 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:56.315 [2024-05-15 10:05:33.691823] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:56.315 [2024-05-15 10:05:33.691954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.315 [2024-05-15 10:05:33.691999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.316 [2024-05-15 10:05:33.692014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7390 with addr=10.0.0.2, port=4420 00:24:56.316 [2024-05-15 10:05:33.692028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7390 is same with the state(5) to be set 00:24:56.316 [2024-05-15 10:05:33.692055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7390 (9): Bad file descriptor 00:24:56.316 [2024-05-15 10:05:33.692072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:56.316 [2024-05-15 10:05:33.692083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:56.316 [2024-05-15 10:05:33.692110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:56.316 [2024-05-15 10:05:33.692126] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
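[editor's note] The host/discovery.sh@63 pipeline used for the port check just below lists the trsvcid of every path the named controller still has. Reconstructed from the trace (a sketch, not the canonical script):

    get_subsystem_paths() {
        local ctrlr_name=$1   # e.g. nvme0
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr_name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # After the 4420 listener is removed, the test expects this output to shrink
    # from "4420 4421" to "4421", which is what the wait just below checks.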
00:24:56.316 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:24:56.316 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.316 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.316 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.316 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.316 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:56.316 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:56.574 [2024-05-15 10:05:33.701900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:56.574 [2024-05-15 10:05:33.702021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.574 [2024-05-15 10:05:33.702066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.574 [2024-05-15 10:05:33.702080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7390 with addr=10.0.0.2, port=4420 00:24:56.574 [2024-05-15 10:05:33.702104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7390 is same with the state(5) to be set 00:24:56.574 [2024-05-15 10:05:33.702123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7390 (9): Bad file descriptor 00:24:56.575 [2024-05-15 10:05:33.702145] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:56.575 [2024-05-15 10:05:33.702156] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:56.575 [2024-05-15 10:05:33.702168] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:56.575 [2024-05-15 10:05:33.702186] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.575 [2024-05-15 10:05:33.711967] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:56.575 [2024-05-15 10:05:33.712107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.575 [2024-05-15 10:05:33.712154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.575 [2024-05-15 10:05:33.712170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd7390 with addr=10.0.0.2, port=4420 00:24:56.575 [2024-05-15 10:05:33.712183] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd7390 is same with the state(5) to be set 00:24:56.575 [2024-05-15 10:05:33.712202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd7390 (9): Bad file descriptor 00:24:56.575 [2024-05-15 10:05:33.712218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:56.575 [2024-05-15 10:05:33.712229] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:56.575 [2024-05-15 10:05:33.712241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:56.575 [2024-05-15 10:05:33.712256] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.575 [2024-05-15 10:05:33.718854] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:56.575 [2024-05-15 10:05:33.718886] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4421 == \4\4\2\1 ]] 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # get_notification_count 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 
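[editor's note] The notification bookkeeping traced at host/discovery.sh@74-75 and @79-80, exercised again right after this point with an expected count of 2, reduces to roughly the following. The notify_id update rule is inferred from the counter values visible in the trace (0 -> 1 -> 2 -> 4), so treat it as an assumption:

    get_notification_count() {
        # count notifications newer than the last id we have consumed
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))   # assumed update rule
    }

    is_notification_count_eq() {
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }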
00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.575 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:56.833 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:56.833 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:56.833 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:56.833 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:24:56.833 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:24:56.833 10:05:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:56.833 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:56.833 10:05:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.768 [2024-05-15 10:05:35.008526] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:57.768 [2024-05-15 10:05:35.008577] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:57.768 [2024-05-15 10:05:35.008597] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:57.768 [2024-05-15 10:05:35.095575] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:58.027 [2024-05-15 10:05:35.155786] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:58.027 [2024-05-15 10:05:35.155874] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:58.027 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.027 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:58.027 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:24:58.027 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:58.027 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:58.027 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:58.027 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:58.027 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:58.027 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:58.027 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.027 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.027 2024/05/15 10:05:35 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 
trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:58.027 request: 00:24:58.027 { 00:24:58.027 "method": "bdev_nvme_start_discovery", 00:24:58.027 "params": { 00:24:58.028 "name": "nvme", 00:24:58.028 "trtype": "tcp", 00:24:58.028 "traddr": "10.0.0.2", 00:24:58.028 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:58.028 "adrfam": "ipv4", 00:24:58.028 "trsvcid": "8009", 00:24:58.028 "wait_for_attach": true 00:24:58.028 } 00:24:58.028 } 00:24:58.028 Got JSON-RPC error response 00:24:58.028 GoRPCClient: error on JSON-RPC call 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:58.028 10:05:35 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.028 2024/05/15 10:05:35 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:58.028 request: 00:24:58.028 { 00:24:58.028 "method": "bdev_nvme_start_discovery", 00:24:58.028 "params": { 00:24:58.028 "name": "nvme_second", 00:24:58.028 "trtype": "tcp", 00:24:58.028 "traddr": "10.0.0.2", 00:24:58.028 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:58.028 "adrfam": "ipv4", 00:24:58.028 "trsvcid": "8009", 00:24:58.028 "wait_for_attach": true 00:24:58.028 } 00:24:58.028 } 00:24:58.028 Got JSON-RPC error response 00:24:58.028 GoRPCClient: error on JSON-RPC call 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:58.028 10:05:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.032 [2024-05-15 10:05:36.406150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.032 [2024-05-15 10:05:36.406274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.032 [2024-05-15 10:05:36.406294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceeef0 with addr=10.0.0.2, port=8010 00:24:59.032 [2024-05-15 10:05:36.406324] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:59.032 [2024-05-15 10:05:36.406337] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:59.032 [2024-05-15 10:05:36.406349] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:00.408 [2024-05-15 10:05:37.406119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.408 [2024-05-15 10:05:37.406220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.408 [2024-05-15 10:05:37.406237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ceeef0 with addr=10.0.0.2, port=8010 00:25:00.408 [2024-05-15 10:05:37.406267] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:00.408 [2024-05-15 10:05:37.406279] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:00.408 [2024-05-15 10:05:37.406290] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:01.023 [2024-05-15 10:05:38.405957] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching 
discovery ctrlr 00:25:01.281 2024/05/15 10:05:38 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:25:01.281 request: 00:25:01.281 { 00:25:01.281 "method": "bdev_nvme_start_discovery", 00:25:01.281 "params": { 00:25:01.281 "name": "nvme_second", 00:25:01.281 "trtype": "tcp", 00:25:01.281 "traddr": "10.0.0.2", 00:25:01.281 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:01.281 "adrfam": "ipv4", 00:25:01.281 "trsvcid": "8010", 00:25:01.281 "attach_timeout_ms": 3000 00:25:01.281 } 00:25:01.281 } 00:25:01.281 Got JSON-RPC error response 00:25:01.281 GoRPCClient: error on JSON-RPC call 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88365 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.281 rmmod nvme_tcp 00:25:01.281 rmmod nvme_fabrics 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88308 ']' 00:25:01.281 10:05:38 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88308 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' -z 88308 ']' 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # kill -0 88308 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # uname 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 88308 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:01.281 killing process with pid 88308 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 88308' 00:25:01.281 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # kill 88308 00:25:01.282 [2024-05-15 10:05:38.566352] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:01.282 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@971 -- # wait 88308 00:25:01.849 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:01.849 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:01.849 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:01.849 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.849 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.849 10:05:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.849 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.849 10:05:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:01.849 00:25:01.849 real 0m11.490s 00:25:01.849 user 0m21.391s 00:25:01.849 sys 0m2.395s 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:01.849 ************************************ 00:25:01.849 END TEST nvmf_host_discovery 00:25:01.849 ************************************ 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.849 10:05:39 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:01.849 10:05:39 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:01.849 10:05:39 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:01.849 10:05:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:01.849 ************************************ 00:25:01.849 START TEST nvmf_host_multipath_status 00:25:01.849 ************************************ 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:01.849 * Looking for test storage... 00:25:01.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.849 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:02.127 Cannot find device "nvmf_tgt_br" 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:25:02.127 Cannot find device "nvmf_tgt_br2" 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:02.127 Cannot find device "nvmf_tgt_br" 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:02.127 Cannot find device "nvmf_tgt_br2" 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:02.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:02.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:02.127 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:02.128 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:02.386 10:05:39 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:02.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:25:02.386 00:25:02.386 --- 10.0.0.2 ping statistics --- 00:25:02.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.386 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:02.386 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:02.386 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:25:02.386 00:25:02.386 --- 10.0.0.3 ping statistics --- 00:25:02.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.386 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:02.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:02.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:25:02.386 00:25:02.386 --- 10.0.0.1 ping statistics --- 00:25:02.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.386 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=88848 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 88848 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 88848 ']' 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:02.386 10:05:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:02.386 [2024-05-15 10:05:39.737439] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:25:02.386 [2024-05-15 10:05:39.737546] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.644 [2024-05-15 10:05:39.889331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:02.901 [2024-05-15 10:05:40.064187] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
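A condensed sketch of the RPC sequence this multipath-status test exercises from here on (every command is taken from the surrounding log; the repo path, socket paths, subsystem NQN, and listener addresses are the ones used on this CI host, not a general recipe):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side: TCP transport, one 64 MiB malloc namespace, two listeners for multipath
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # initiator side (bdevperf RPC socket): attach both listeners under one controller name,
    # the second attach with -x multipath so the two paths are grouped
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # flip the ANA state of a listener, then read back the path view seen by the host
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'

The per-port checks that follow in the log (port_status 4420/4421 with the current/connected/accessible jq selectors) are all variations of that last bdev_nvme_get_io_paths query.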
00:25:02.901 [2024-05-15 10:05:40.064254] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.901 [2024-05-15 10:05:40.064270] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.901 [2024-05-15 10:05:40.064284] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.901 [2024-05-15 10:05:40.064296] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.901 [2024-05-15 10:05:40.065369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.901 [2024-05-15 10:05:40.065376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.466 10:05:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:03.466 10:05:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:25:03.466 10:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:03.466 10:05:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:03.466 10:05:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:03.724 10:05:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.724 10:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=88848 00:25:03.724 10:05:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:03.982 [2024-05-15 10:05:41.157010] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.982 10:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:04.239 Malloc0 00:25:04.239 10:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:04.497 10:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:04.755 10:05:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:05.013 [2024-05-15 10:05:42.217324] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:05.013 [2024-05-15 10:05:42.218217] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.013 10:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:05.272 [2024-05-15 10:05:42.533800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:05.272 10:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=88948 00:25:05.272 10:05:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:05.272 10:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:05.272 10:05:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 88948 /var/tmp/bdevperf.sock 00:25:05.272 10:05:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 88948 ']' 00:25:05.272 10:05:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:05.272 10:05:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:05.272 10:05:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:05.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:05.272 10:05:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:05.272 10:05:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:06.646 10:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:06.646 10:05:43 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:25:06.646 10:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:06.646 10:05:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:07.212 Nvme0n1 00:25:07.212 10:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:07.469 Nvme0n1 00:25:07.469 10:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:07.469 10:05:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:10.000 10:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:10.000 10:05:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:10.000 10:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:10.259 10:05:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:11.194 10:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:11.194 10:05:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:11.194 10:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.194 10:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:11.452 10:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.452 10:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:11.452 10:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.452 10:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:11.709 10:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:11.709 10:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:11.709 10:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.709 10:05:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.967 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.967 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.967 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.967 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:12.226 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.226 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:12.226 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.226 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:12.487 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.487 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:12.487 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:12.487 10:05:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.746 10:05:50 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.746 10:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:12.746 10:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:13.007 10:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:13.574 10:05:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:14.510 10:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:14.510 10:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:14.510 10:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.510 10:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:14.769 10:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:14.769 10:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:14.769 10:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.769 10:05:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:15.027 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.027 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:15.027 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.027 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:15.286 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.286 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:15.286 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.286 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:15.544 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.544 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:15.544 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.544 10:05:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:15.801 10:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.801 10:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:15.801 10:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:15.801 10:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.367 10:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.367 10:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:16.367 10:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:16.626 10:05:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:16.884 10:05:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:17.817 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:17.817 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:17.817 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.817 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:18.076 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.076 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:18.076 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.076 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.334 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.334 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.334 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.334 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:25:18.593 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.593 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:18.593 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.593 10:05:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:18.852 10:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.852 10:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:18.852 10:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.852 10:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:19.493 10:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.493 10:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:19.493 10:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:19.493 10:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.493 10:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.493 10:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:19.493 10:05:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:19.751 10:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:20.009 10:05:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:21.384 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:21.384 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:21.384 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.384 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:21.384 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.384 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # 
port_status 4421 current false 00:25:21.384 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:21.384 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.707 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.707 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:21.707 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:21.707 10:05:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.966 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.966 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:21.966 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.966 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:22.226 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.226 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:22.226 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.226 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:22.485 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.485 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:22.485 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:22.485 10:05:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.051 10:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:23.051 10:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:23.051 10:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:23.051 10:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n inaccessible 00:25:23.616 10:06:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:24.550 10:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:24.550 10:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:24.550 10:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.550 10:06:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:24.808 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.808 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:24.808 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.808 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:25.067 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:25.067 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:25.067 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.067 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:25.326 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.326 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:25.326 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:25.326 10:06:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:25.892 10:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.892 10:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:25.893 10:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:25.893 10:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.151 10:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.151 10:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:26.151 10:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.151 10:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:26.410 10:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:26.410 10:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:26.410 10:06:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:26.668 10:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:27.235 10:06:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:28.172 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:28.172 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:28.172 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.172 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:28.431 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:28.431 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:28.431 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:28.431 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.689 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.689 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:28.689 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.689 10:06:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:28.947 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.947 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:28.947 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.947 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:25:29.512 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.512 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:29.512 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:29.512 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.770 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.770 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:29.770 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.770 10:06:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.028 10:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.028 10:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:30.594 10:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:30.594 10:06:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:30.853 10:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:31.111 10:06:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:32.048 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:32.048 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:32.048 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.048 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.306 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.306 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:32.306 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.306 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
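The status checks traced above all follow one pattern: query the bdevperf RPC server for its I/O paths and extract a single field for a given listener port from the JSON with jq, before and after bdev_nvme_set_multipath_policy switches Nvme0n1 to active_active. A minimal sketch of such a helper, assuming the rpc.py and /var/tmp/bdevperf.sock paths shown in the trace (the function name and argument order are illustrative and need not match multipath_status.sh exactly):

port_status() {
    # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
    local port=$1 field=$2 expected=$3 actual
    actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                 bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    # succeed only if the reported path state matches the expectation
    [[ "$actual" == "$expected" ]]
}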
00:25:32.565 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.565 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.566 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.566 10:06:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.824 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.824 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.824 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.824 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:33.390 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.390 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:33.390 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:33.390 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.648 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.648 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:33.648 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.648 10:06:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.907 10:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.907 10:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:33.907 10:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:34.165 10:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:34.423 10:06:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:35.357 10:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:35.357 10:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:35.357 
10:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.357 10:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:35.616 10:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.616 10:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:35.616 10:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.616 10:06:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.180 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.180 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:36.180 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.180 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:36.438 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.438 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:36.438 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.438 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:36.696 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.696 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:36.696 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.696 10:06:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:36.954 10:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.954 10:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:36.954 10:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.954 10:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:37.211 10:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.211 10:06:14 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:37.211 10:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:37.468 10:06:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:37.735 10:06:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:38.690 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:38.690 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:38.690 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.690 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:38.949 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.949 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:38.949 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.949 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:39.523 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.523 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:39.523 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.523 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:39.782 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.782 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:39.782 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:39.782 10:06:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.052 10:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.052 10:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:40.052 10:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.052 10:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:40.366 10:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.366 10:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:40.366 10:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.366 10:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.646 10:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.646 10:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:40.646 10:06:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:40.976 10:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:41.254 10:06:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:42.189 10:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:42.189 10:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:42.189 10:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.189 10:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.447 10:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.447 10:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:42.447 10:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.447 10:06:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:43.013 10:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.013 10:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:43.013 10:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.013 10:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:43.272 10:06:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.272 10:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:43.272 10:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.272 10:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.531 10:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.531 10:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.531 10:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:43.531 10:06:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 88948 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 88948 ']' 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 88948 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 88948 00:25:44.097 killing process with pid 88948 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 88948' 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 88948 00:25:44.097 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 88948 00:25:44.354 Connection closed with partial response: 00:25:44.354 00:25:44.354 00:25:44.665 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 88948 00:25:44.665 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:44.665 [2024-05-15 10:05:42.637716] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:25:44.665 [2024-05-15 10:05:42.637899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88948 ] 00:25:44.665 [2024-05-15 10:05:42.789038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.665 [2024-05-15 10:05:42.980139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.665 Running I/O for 90 seconds... 00:25:44.665 [2024-05-15 10:06:00.372842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.372942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.372981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.372998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.373021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.373038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.373060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.373077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.373111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.373127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.373149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.373166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.373189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.373204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.373227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.373243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.373836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.373861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.373887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.373903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.373926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.373957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.373980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.373996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.665 [2024-05-15 10:06:00.374542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.665 [2024-05-15 10:06:00.374582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.665 [2024-05-15 10:06:00.374622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.665 [2024-05-15 10:06:00.374661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:44.665 [2024-05-15 10:06:00.374684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.665 [2024-05-15 10:06:00.374701] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.374723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.666 [2024-05-15 10:06:00.374739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.374761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.666 [2024-05-15 10:06:00.374777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.374799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.666 [2024-05-15 10:06:00.374816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.374838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.666 [2024-05-15 10:06:00.374854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.374876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.666 [2024-05-15 10:06:00.374892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.374915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.374931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.374953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.374969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.374998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:25 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.375975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.375998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.376020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.376037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.376060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.376076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.376110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.376127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.376151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.376167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.376190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.376206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:44.666 [2024-05-15 10:06:00.376229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.666 [2024-05-15 10:06:00.376245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.376884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.376909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
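The repeated "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" completions in this bdevperf trace are NVMe path-related statuses (status code type 0x3, status code 0x02) returned while a listener sits in the inaccessible ANA state; the test drives those transitions with the two listener RPCs seen at multipath_status.sh@59/@60 earlier in the log. A rough sketch of that helper, reusing the NQN, address, and ports from the trace (the helper name and parameter order here are illustrative):

set_ANA_state() {
    # set_ANA_state <state-for-4420> <state-for-4421>, e.g. set_ANA_state non_optimized inaccessible
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}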
00:25:44.667 [2024-05-15 10:06:00.376936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.376952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.376976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.376993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.377970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.377986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.378009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.378025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.378048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.378064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.378087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.667 [2024-05-15 10:06:00.378105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:44.667 [2024-05-15 10:06:00.378136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:44.667 [2024-05-15 10:06:00.378160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.668 [2024-05-15 10:06:00.378199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.668 [2024-05-15 10:06:00.378238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.668 [2024-05-15 10:06:00.378278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.668 [2024-05-15 10:06:00.378317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.668 [2024-05-15 10:06:00.378356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.668 [2024-05-15 10:06:00.378395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.668 [2024-05-15 10:06:00.378433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.378965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.378981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.379004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.379020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.379043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.379059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.379101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.379119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.379149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.379166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.379189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.379205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.379228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.668 [2024-05-15 10:06:00.379244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:44.668 [2024-05-15 10:06:00.379267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.669 [2024-05-15 10:06:00.379283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.379305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.669 [2024-05-15 10:06:00.379321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:25:44.669 [2024-05-15 10:06:00.379345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.669 [2024-05-15 10:06:00.379361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.379384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.379400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.379423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.379439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.379461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.379477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.379500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.379516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.379539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.379555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.379578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.379594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.379622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.379639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.380975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.380991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.381014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.381030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.381052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.669 [2024-05-15 10:06:00.381068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:44.669 [2024-05-15 10:06:00.381102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.670 [2024-05-15 10:06:00.381119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.670 [2024-05-15 10:06:00.381158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.670 [2024-05-15 10:06:00.381198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:44.670 [2024-05-15 10:06:00.381236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.670 [2024-05-15 10:06:00.381275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.670 [2024-05-15 10:06:00.381322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.670 [2024-05-15 10:06:00.381361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.670 [2024-05-15 10:06:00.381400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.670 [2024-05-15 10:06:00.381439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.670 [2024-05-15 10:06:00.381478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.670 [2024-05-15 10:06:00.381516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.670 [2024-05-15 10:06:00.381554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.670 [2024-05-15 10:06:00.381594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 
nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.670 [2024-05-15 10:06:00.381632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.670 [2024-05-15 10:06:00.381670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.670 [2024-05-15 10:06:00.381708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.670 [2024-05-15 10:06:00.381746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.670 [2024-05-15 10:06:00.381786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.670 [2024-05-15 10:06:00.381830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.670 [2024-05-15 10:06:00.381872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:44.670 [2024-05-15 10:06:00.381895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.381911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.381934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.381949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.381972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.381988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:25:44.671 [2024-05-15 10:06:00.382416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.382690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.382706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.383421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.383450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.383476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.383504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.383527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.383543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.383565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.383581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.383603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.383619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.383642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.383658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.383681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-05-15 10:06:00.383697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:44.671 [2024-05-15 10:06:00.383719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.383735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.383757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.383773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.383795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.383812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.383835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.383851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.383874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.383889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.383912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.383927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.383950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.383971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.383994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:44.672 [2024-05-15 10:06:00.384291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:44.672 [2024-05-15 10:06:00.384781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-05-15 10:06:00.384797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.384820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.673 [2024-05-15 10:06:00.384836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.384858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.673 [2024-05-15 10:06:00.384873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.384896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.673 [2024-05-15 10:06:00.384912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.384940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.673 [2024-05-15 10:06:00.384956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.384978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.384994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:25:44.673 [2024-05-15 10:06:00.385463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.673 [2024-05-15 10:06:00.385869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:44.673 [2024-05-15 10:06:00.385892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.673 [2024-05-15 10:06:00.385914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.385937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.385953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.385976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.385991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.386014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.386030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.386052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.386068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.386099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.386116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.386836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.386861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.386887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.386904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.386927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.386943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.386965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.386981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:44.674 [2024-05-15 10:06:00.387373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.674 [2024-05-15 10:06:00.387604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.674 [2024-05-15 10:06:00.387648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.674 [2024-05-15 10:06:00.387687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.674 [2024-05-15 10:06:00.387726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.674 [2024-05-15 10:06:00.387765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:44.674 [2024-05-15 10:06:00.387791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.675 [2024-05-15 10:06:00.387807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.387830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.675 [2024-05-15 10:06:00.387846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.387869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.675 [2024-05-15 10:06:00.387885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.387907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.675 [2024-05-15 10:06:00.387924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.387946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.675 [2024-05-15 10:06:00.387962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.387984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:25:44.675 [2024-05-15 10:06:00.388544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.388699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.388715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.394913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.394955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.394977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.394993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.395016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.395032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.395054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.395070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.395134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.395150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.395173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.395189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.395224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.395241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.395264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.675 [2024-05-15 10:06:00.395282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:44.675 [2024-05-15 10:06:00.395304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.395320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.395344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.395360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.395410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.395426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.395449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.395465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.395488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.395504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:44.676 [2024-05-15 10:06:00.396816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.396969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.396991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.397007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.397030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.397045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.397068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.397084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.397119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.397135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.397158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.397182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.397205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.397221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.397244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.397260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:44.676 [2024-05-15 10:06:00.397282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.676 [2024-05-15 10:06:00.397298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.397886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.397924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.397947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.397963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:25:44.677 [2024-05-15 10:06:00.397986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.677 [2024-05-15 10:06:00.398795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.398833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:44.677 [2024-05-15 10:06:00.398855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.677 [2024-05-15 10:06:00.398871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.398893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.398909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.398931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.398946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.398969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.398985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.399728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.399754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.399780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.399797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.399820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.399845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.399869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:44.678 [2024-05-15 10:06:00.399884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.399907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.399923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.399945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.399961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.399984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.399999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.678 [2024-05-15 10:06:00.400559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.678 [2024-05-15 10:06:00.400597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.678 [2024-05-15 10:06:00.400635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.678 [2024-05-15 10:06:00.400674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.678 [2024-05-15 10:06:00.400712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.678 [2024-05-15 10:06:00.400755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.678 [2024-05-15 10:06:00.400794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.678 [2024-05-15 10:06:00.400840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.678 [2024-05-15 10:06:00.400879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.400979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.400994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.401017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.401033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 
dnr:0 00:25:44.678 [2024-05-15 10:06:00.401056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.401071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.401103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.401119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.401142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.401158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:44.678 [2024-05-15 10:06:00.401180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.678 [2024-05-15 10:06:00.401196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.401965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.401981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.402004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.402022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.402044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.402060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.402721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.402747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.402772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.402789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.402811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.402828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.402851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:44.679 [2024-05-15 10:06:00.402867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.402890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.402915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.402938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.402954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.402976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.402992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.403014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.403030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.403053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.403069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.403115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.403131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.403154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.403170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.403193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.403209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.403234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.413513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.413585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.413610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.413642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.413666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.413697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.413720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.413752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.413774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.413824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.679 [2024-05-15 10:06:00.413847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:44.679 [2024-05-15 10:06:00.413878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.413900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.413932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.413954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.413985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:25:44.680 [2024-05-15 10:06:00.414716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.414952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.414983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.415005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.415059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.680 [2024-05-15 10:06:00.415141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.415945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.680 [2024-05-15 10:06:00.415968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:44.680 [2024-05-15 10:06:00.416000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.681 [2024-05-15 10:06:00.416022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.416054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.681 [2024-05-15 10:06:00.416076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.416119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.681 [2024-05-15 10:06:00.416142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.416173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.681 [2024-05-15 10:06:00.416195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.416226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.681 [2024-05-15 10:06:00.416249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.416280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.681 [2024-05-15 10:06:00.416303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.416334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:44.681 [2024-05-15 10:06:00.416357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.416388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.681 [2024-05-15 10:06:00.416411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.416442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.416464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.416495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.416517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.416559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.416582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.416615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.416637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.417806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.417844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.417882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.417905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.417937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.417960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.417992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418618] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.681 [2024-05-15 10:06:00.418961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.418994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.681 [2024-05-15 10:06:00.419016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.419048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.681 [2024-05-15 10:06:00.419107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.681 [2024-05-15 10:06:00.419140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.681 [2024-05-15 10:06:00.419162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:25:44.681 [2024-05-15 10:06:00.419193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.682 [2024-05-15 10:06:00.419215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.682 [2024-05-15 10:06:00.419269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.682 [2024-05-15 10:06:00.419323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.682 [2024-05-15 10:06:00.419376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.682 [2024-05-15 10:06:00.419430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.682 [2024-05-15 10:06:00.419484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.419538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.419591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.419644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.419697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.419764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.419817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.419871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.419924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.419955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.419977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.420009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.420031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.420063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.420085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.420127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.420149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.420180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.420203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:44.682 [2024-05-15 10:06:00.420235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.682 [2024-05-15 10:06:00.420257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:44.682 [2024-05-15 10:06:00.420288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:44.682 [2024-05-15 10:06:00.420310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion notice pair repeats for the remaining outstanding READ and WRITE commands on qid:1 (lba 75656-76672, 2024-05-15 10:06:00.420341 through 10:06:00.432488), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:25:44.687 [2024-05-15 10:06:00.432509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.687 [2024-05-15 10:06:00.432525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:44.687 [2024-05-15 10:06:00.432546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.687 [2024-05-15 10:06:00.432562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:44.687 [2024-05-15 10:06:00.432583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.687 [2024-05-15 10:06:00.432599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:44.687 [2024-05-15 10:06:00.432621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.687 [2024-05-15 10:06:00.432637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:44.687 [2024-05-15 10:06:00.433349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.687 [2024-05-15 10:06:00.433374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:44.687 [2024-05-15 10:06:00.433408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.687 [2024-05-15 10:06:00.433424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:44.687 [2024-05-15 10:06:00.433446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.687 [2024-05-15 10:06:00.433461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:44.687 [2024-05-15 10:06:00.433483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.687 [2024-05-15 10:06:00.433498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:25:44.688 [2024-05-15 10:06:00.433595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.433974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.433990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.688 [2024-05-15 10:06:00.434261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.688 [2024-05-15 10:06:00.434298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.688 [2024-05-15 10:06:00.434342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.688 [2024-05-15 10:06:00.434380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.688 [2024-05-15 10:06:00.434417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.688 [2024-05-15 10:06:00.434454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.688 [2024-05-15 10:06:00.434491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.688 [2024-05-15 10:06:00.434528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.688 [2024-05-15 10:06:00.434565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.434964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.434979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.435001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.688 [2024-05-15 10:06:00.435017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:44.688 [2024-05-15 10:06:00.435038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.435582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.435597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 
00:25:44.689 [2024-05-15 10:06:00.436470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.689 [2024-05-15 10:06:00.436943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.689 [2024-05-15 10:06:00.436964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.436980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:44.690 [2024-05-15 10:06:00.437600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.690 [2024-05-15 10:06:00.437904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.437941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.437963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.437980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.690 [2024-05-15 10:06:00.438480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:44.690 [2024-05-15 10:06:00.438501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.438516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.438538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.438553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.438575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.438590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.438612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.438627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.438649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.438665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.438686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.438707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
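The completion notices in this stretch of the log all carry status (03/02), i.e. Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), which is consistent with the controller reporting the namespace's ANA group as inaccessible while these queued READ/WRITE commands on qid:1 are drained. The short C sketch below is purely illustrative (a hypothetical standalone decoder, not SPDK code and not part of this test run); the bit layout is the NVMe completion-queue-entry status field, the same layout SPDK's spdk_nvme_status bitfield uses, and the field names (p, sc, sct, m, dnr) match the ones printed in the notices above.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative decoder for the 16-bit upper half of completion dword 3:
 * bit 0 is the phase tag, bits 15:1 are the NVMe Status Field.
 * Hypothetical helper for reading logs like the ones above; not SPDK code.
 */
static void decode_nvme_status(uint16_t st)
{
    unsigned p   = st & 0x1;          /* phase tag */
    unsigned sc  = (st >> 1) & 0xff;  /* status code */
    unsigned sct = (st >> 9) & 0x7;   /* status code type */
    unsigned crd = (st >> 12) & 0x3;  /* command retry delay */
    unsigned m   = (st >> 14) & 0x1;  /* more */
    unsigned dnr = (st >> 15) & 0x1;  /* do not retry */

    printf("sct:0x%x sc:0x%02x p:%u m:%u dnr:%u crd:%u", sct, sc, p, m, dnr, crd);
    if (sct == 0x3 && sc == 0x02) {
        printf("  -> Path Related Status / Asymmetric Access Inaccessible");
    }
    printf("\n");
}

int main(void)
{
    /* Status value matching the "(03/02) ... p:0 m:0 dnr:0" notices above. */
    decode_nvme_status((uint16_t)((0x3 << 9) | (0x02 << 1)));
    return 0;
}

Built with any C compiler (e.g. cc decode.c), this prints sct:0x3 sc:0x02 p:0 m:0 dnr:0 crd:0 followed by the ANA Inaccessible annotation, matching how spdk_nvme_print_completion renders the same status in the log.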
00:25:44.691 [2024-05-15 10:06:00.438729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.438744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.438766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.438781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.438803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.438818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.439969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.439991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.691 [2024-05-15 10:06:00.440437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.440473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.440510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:44.691 [2024-05-15 10:06:00.440548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.440585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.440622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.440659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.440696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:44.691 [2024-05-15 10:06:00.440718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.691 [2024-05-15 10:06:00.440733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.440755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.692 [2024-05-15 10:06:00.440770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.440797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.440813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.440835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.440850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.440871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.440887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.440909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 
nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.440924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.440946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.440961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.440982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.440997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.441019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.441034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.441056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.441071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.441102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.441121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.441143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.441158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.441180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.441195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.441217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.441232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.441254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.441276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.441298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.441316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.441338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.441353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.451495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.451535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.451558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.451577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.451599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.451616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.451639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.451655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.451677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.451693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.451715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.451732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.451753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.451769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.451791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.451807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 
00:25:44.692 [2024-05-15 10:06:00.451829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.451845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.451867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.451898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.451922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.451938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:44.692 [2024-05-15 10:06:00.452868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.692 [2024-05-15 10:06:00.452884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.452909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.452925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.452951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.452966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.452992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453048] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453497] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.453980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.453995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.454036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.454077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.454131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.454173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.454226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.693 [2024-05-15 10:06:00.454268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.693 [2024-05-15 10:06:00.454310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.693 [2024-05-15 10:06:00.454371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.693 [2024-05-15 10:06:00.454421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.693 [2024-05-15 10:06:00.454464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.693 [2024-05-15 10:06:00.454508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.693 [2024-05-15 10:06:00.454551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.693 [2024-05-15 10:06:00.454593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.693 [2024-05-15 10:06:00.454636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:44.693 [2024-05-15 10:06:00.454662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.693 [2024-05-15 10:06:00.454679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.454705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.454721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.454748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.454764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.454790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.454806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.454833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.454849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.454875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.454893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.454919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.454935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.454967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.454983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.455010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.455026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.455054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.455070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.455131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.455150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.455181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.455201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.455240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.455265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:25:44.694 [2024-05-15 10:06:00.455292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.455309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.455350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:00.455370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:00.455598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:00.455621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.459373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:18.459466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.459527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:18.459545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.459568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:18.459584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.459629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:101896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:18.459645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.459667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:18.459682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.459704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:18.459719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.459740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:18.459756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.459777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:18.459792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.459813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:18.459829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.461266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.461310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.461333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.461349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.461371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.461386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.461408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.461423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.461445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.461460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.461482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.461498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.461531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.461549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.461571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:18.461586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.461608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.694 [2024-05-15 10:06:18.461624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.462039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.462063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.462088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.462117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.462140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.462155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.462177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.462192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:44.694 [2024-05-15 10:06:18.462214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.694 [2024-05-15 10:06:18.462230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:44.695 [2024-05-15 10:06:18.462251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.695 [2024-05-15 10:06:18.462277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:44.695 [2024-05-15 10:06:18.462298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.695 [2024-05-15 10:06:18.462313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:44.695 [2024-05-15 10:06:18.462561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.695 [2024-05-15 10:06:18.462582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:44.695 Received shutdown signal, test time was about 36.553325 seconds 00:25:44.695 00:25:44.695 Latency(us) 00:25:44.695 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:25:44.695 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:44.695 Verification LBA range: start 0x0 length 0x4000 00:25:44.695 Nvme0n1 : 36.55 9203.01 35.95 0.00 0.00 13880.80 339.38 4122401.65 00:25:44.695 =================================================================================================================== 00:25:44.695 Total : 9203.01 35.95 0.00 0.00 13880.80 339.38 4122401.65 00:25:44.695 10:06:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:44.953 rmmod nvme_tcp 00:25:44.953 rmmod nvme_fabrics 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 88848 ']' 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 88848 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 88848 ']' 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 88848 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 88848 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 88848' 00:25:44.953 killing process with pid 88848 00:25:44.953 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 88848 00:25:44.953 [2024-05-15 10:06:22.226826] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:44.953 10:06:22 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 88848 00:25:45.519 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:45.519 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:45.519 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:45.519 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:45.519 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:45.519 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.519 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.519 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.519 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:45.519 00:25:45.519 real 0m43.585s 00:25:45.519 user 2m19.488s 00:25:45.519 sys 0m13.885s 00:25:45.519 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:45.519 10:06:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:45.519 ************************************ 00:25:45.519 END TEST nvmf_host_multipath_status 00:25:45.519 ************************************ 00:25:45.519 10:06:22 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:45.519 10:06:22 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:45.519 10:06:22 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:45.519 10:06:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.519 ************************************ 00:25:45.519 START TEST nvmf_discovery_remove_ifc 00:25:45.519 ************************************ 00:25:45.519 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:45.519 * Looking for test storage... 
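The I/O summary printed above for the multipath-status run is internally consistent and worth a quick read: 9203.01 IOPS at the 4096-byte IO size over the roughly 36.55 s verify run is the 35.95 MiB/s reported, and with a queue depth of 128 the average latency follows from depth divided by IOPS. A minimal sanity check, assuming only that bc is available on the build host (this is not part of the test itself):

  # throughput = IOPS * IO size in MiB/s; average latency ~= queue depth / IOPS
  iops=9203.01; io_size=4096; qd=128
  echo "scale=4; $iops * $io_size / 1048576" | bc    # -> 35.9492, i.e. the 35.95 MiB/s column above
  echo "scale=2; $qd * 1000000 / $iops" | bc         # -> ~13908 us, close to the 13880.80 us Average

The min/max columns are per-IO latency in microseconds; the 4122401.65 us (about 4.1 s) maximum is consistent with IOs that were held while the path reported ASYMMETRIC ACCESS INACCESSIBLE in the notices earlier in the run.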
00:25:45.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:45.520 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:45.779 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:45.779 Cannot find device "nvmf_tgt_br" 00:25:45.779 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:25:45.779 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:25:45.779 Cannot find device "nvmf_tgt_br2" 00:25:45.779 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:25:45.779 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:45.779 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:45.779 Cannot find device "nvmf_tgt_br" 00:25:45.780 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:25:45.780 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:45.780 Cannot find device "nvmf_tgt_br2" 00:25:45.780 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:25:45.780 10:06:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:45.780 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:45.780 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:45.780 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:46.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:25:46.038 00:25:46.038 --- 10.0.0.2 ping statistics --- 00:25:46.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.038 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:46.038 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:46.038 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:25:46.038 00:25:46.038 --- 10.0.0.3 ping statistics --- 00:25:46.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.038 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:46.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:46.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:25:46.038 00:25:46.038 --- 10.0.0.1 ping statistics --- 00:25:46.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.038 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90282 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90282 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 90282 ']' 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:46.038 10:06:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:46.038 [2024-05-15 10:06:23.384389] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:25:46.038 [2024-05-15 10:06:23.384494] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:46.295 [2024-05-15 10:06:23.526884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.553 [2024-05-15 10:06:23.705516] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:46.553 [2024-05-15 10:06:23.705596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:46.553 [2024-05-15 10:06:23.705612] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:46.553 [2024-05-15 10:06:23.705626] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:46.553 [2024-05-15 10:06:23.705637] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:46.553 [2024-05-15 10:06:23.705685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.120 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:47.120 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:25:47.120 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:47.120 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:47.120 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.120 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.120 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:47.120 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:47.120 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.120 [2024-05-15 10:06:24.480638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.120 [2024-05-15 10:06:24.488586] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:47.120 [2024-05-15 10:06:24.488882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:47.120 null0 00:25:47.379 [2024-05-15 10:06:24.521044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.379 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:47.379 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90332 00:25:47.379 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:47.379 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90332 /tmp/host.sock 00:25:47.379 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 90332 ']' 00:25:47.379 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:25:47.379 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:47.379 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:47.379 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
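At this point in the trace, nvmf_veth_init has finished building the test topology and nvmfappstart has launched the target: nvmf_tgt runs inside the nvmf_tgt_ns_spdk namespace as pid 90282 (RPC socket /var/tmp/spdk.sock), and a second nvmf_tgt has just been launched as the host/initiator side, pid 90332, with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme. Condensed from the nvmf/common.sh commands traced above (the link-up steps and ping connectivity checks are omitted here), the fixture is approximately:

    # sketch of the topology built by nvmf_veth_init, condensed from the trace above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side,    10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target,  10.0.0.3/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The rest of the suite drives everything through these two processes: the target is told to listen on 10.0.0.2 (discovery on 8009, I/O on 4420), the host starts bdev_nvme discovery with short loss/reconnect timeouts and waits for the namespace to show up as bdev nvme0n1, then 10.0.0.2 is removed from nvmf_tgt_if and the interface is taken down, the bdev is expected to disappear, and once the address and link are restored the re-attached controller is expected to come back as nvme1n1.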
00:25:47.379 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:47.379 10:06:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:47.379 [2024-05-15 10:06:24.608138] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:25:47.379 [2024-05-15 10:06:24.608278] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90332 ] 00:25:47.379 [2024-05-15 10:06:24.755687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.638 [2024-05-15 10:06:24.922730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.572 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:48.572 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:25:48.572 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:48.572 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:48.572 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.573 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.573 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.573 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:48.573 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.573 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:48.573 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:48.573 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:48.573 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:48.573 10:06:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.948 [2024-05-15 10:06:26.894079] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:49.948 [2024-05-15 10:06:26.894157] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:49.948 [2024-05-15 10:06:26.894181] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:49.948 [2024-05-15 10:06:26.980883] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:49.948 [2024-05-15 10:06:27.048235] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:49.948 [2024-05-15 10:06:27.048353] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:49.948 [2024-05-15 
10:06:27.048386] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:49.948 [2024-05-15 10:06:27.048411] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:49.948 [2024-05-15 10:06:27.048445] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:49.948 [2024-05-15 10:06:27.052492] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ca7820 was disconnected and freed. delete nvme_qpair. 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:49.948 10:06:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:50.911 10:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:50.911 10:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:50.911 10:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.911 10:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:50.911 10:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:50.911 10:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:50.911 10:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:50.911 10:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:50.911 10:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:50.911 10:06:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:52.287 10:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.287 10:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.287 10:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.287 10:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:52.287 10:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:52.287 10:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.287 10:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.287 10:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:52.287 10:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:52.287 10:06:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:53.221 10:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:53.221 10:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:53.221 10:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:53.221 10:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.221 10:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:53.221 10:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.221 10:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:53.221 10:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:53.221 10:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:53.221 10:06:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:54.281 10:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:54.281 10:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:54.281 10:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.281 10:06:31 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:54.281 10:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.281 10:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:54.281 10:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:54.281 10:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.281 10:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:54.281 10:06:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:55.218 10:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:55.218 10:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.218 10:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:55.218 10:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.218 10:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:55.218 10:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.218 10:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:55.218 10:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.218 10:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:55.218 10:06:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:55.218 [2024-05-15 10:06:32.485666] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:55.218 [2024-05-15 10:06:32.485755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.218 [2024-05-15 10:06:32.485776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.218 [2024-05-15 10:06:32.485790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.218 [2024-05-15 10:06:32.485802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.218 [2024-05-15 10:06:32.485815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.218 [2024-05-15 10:06:32.485826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.218 [2024-05-15 10:06:32.485838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.218 [2024-05-15 10:06:32.485849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.218 [2024-05-15 10:06:32.485861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 
nsid:0 cdw10:00000000 cdw11:00000000 00:25:55.218 [2024-05-15 10:06:32.485872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:55.218 [2024-05-15 10:06:32.485885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c71490 is same with the state(5) to be set 00:25:55.218 [2024-05-15 10:06:32.495657] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c71490 (9): Bad file descriptor 00:25:55.218 [2024-05-15 10:06:32.505684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:56.152 10:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:56.152 10:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:56.152 10:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.152 10:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:56.152 10:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:56.152 10:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:56.152 10:06:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.409 [2024-05-15 10:06:33.569208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:57.344 [2024-05-15 10:06:34.593222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:57.344 [2024-05-15 10:06:34.593389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c71490 with addr=10.0.0.2, port=4420 00:25:57.344 [2024-05-15 10:06:34.593435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c71490 is same with the state(5) to be set 00:25:57.344 [2024-05-15 10:06:34.594473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c71490 (9): Bad file descriptor 00:25:57.344 [2024-05-15 10:06:34.594542] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:57.344 [2024-05-15 10:06:34.594596] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:57.344 [2024-05-15 10:06:34.594672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.344 [2024-05-15 10:06:34.594721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.344 [2024-05-15 10:06:34.594755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.344 [2024-05-15 10:06:34.594783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.344 [2024-05-15 10:06:34.594812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.344 [2024-05-15 10:06:34.594838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.344 [2024-05-15 10:06:34.594866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.344 [2024-05-15 10:06:34.594893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.344 [2024-05-15 10:06:34.594922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:57.344 [2024-05-15 10:06:34.594948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:57.344 [2024-05-15 10:06:34.594976] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:57.344 [2024-05-15 10:06:34.595010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c10280 (9): Bad file descriptor 00:25:57.344 [2024-05-15 10:06:34.595577] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:57.344 [2024-05-15 10:06:34.595612] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:57.344 10:06:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.344 10:06:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:57.344 10:06:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:58.277 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:58.277 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.277 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:58.277 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:58.277 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.277 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.277 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:58.277 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:58.536 10:06:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.470 [2024-05-15 10:06:36.606055] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:59.470 [2024-05-15 10:06:36.606128] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:59.470 [2024-05-15 10:06:36.606152] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:59.470 [2024-05-15 10:06:36.694255] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:59.470 [2024-05-15 10:06:36.757046] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:59.470 [2024-05-15 10:06:36.757142] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:59.470 [2024-05-15 10:06:36.757171] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:59.470 [2024-05-15 10:06:36.757194] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:59.470 [2024-05-15 10:06:36.757206] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:59.470 [2024-05-15 10:06:36.764945] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c88130 was disconnected and freed. delete nvme_qpair. 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90332 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 90332 ']' 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 90332 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:59.470 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 90332 00:25:59.730 killing process with pid 90332 00:25:59.730 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:59.730 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:59.730 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 90332' 00:25:59.730 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@966 -- # kill 90332 00:25:59.730 10:06:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 90332 00:25:59.988 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:59.988 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:59.988 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:59.988 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:59.988 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:59.988 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:59.988 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:59.988 rmmod nvme_tcp 00:25:59.988 rmmod nvme_fabrics 00:25:59.988 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:59.988 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:59.988 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90282 ']' 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90282 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 90282 ']' 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 90282 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 90282 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 90282' 00:25:59.989 killing process with pid 90282 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 90282 00:25:59.989 [2024-05-15 10:06:37.371730] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:59.989 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 90282 00:26:00.556 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:00.556 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:00.556 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:00.556 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:00.556 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:00.556 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.556 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.556 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.556 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:00.556 00:26:00.556 real 0m15.049s 00:26:00.556 user 0m24.989s 00:26:00.556 sys 0m2.523s 00:26:00.556 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:00.556 ************************************ 00:26:00.556 10:06:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.556 END TEST nvmf_discovery_remove_ifc 00:26:00.556 ************************************ 00:26:00.556 10:06:37 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:00.556 10:06:37 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:00.556 10:06:37 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:00.556 10:06:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:00.556 ************************************ 00:26:00.556 START TEST nvmf_identify_kernel_target 00:26:00.556 ************************************ 00:26:00.556 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:00.556 * Looking for test storage... 00:26:00.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.814 10:06:37 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:00.814 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:00.815 10:06:37 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:00.815 10:06:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:00.815 Cannot find device "nvmf_tgt_br" 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:00.815 Cannot find device "nvmf_tgt_br2" 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:00.815 Cannot find device "nvmf_tgt_br" 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:00.815 Cannot find device "nvmf_tgt_br2" 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:00.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:00.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:00.815 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:01.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:26:01.089 00:26:01.089 --- 10.0.0.2 ping statistics --- 00:26:01.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.089 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:01.089 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:01.089 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:26:01.089 00:26:01.089 --- 10.0.0.3 ping statistics --- 00:26:01.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.089 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:01.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
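The nvmf_veth_init steps traced here build the loopback test network: a dedicated network namespace for the target, veth pairs whose bridge-side peers are enslaved to nvmf_br, addresses 10.0.0.1/2/3 on a /24, an iptables rule admitting TCP port 4420 on the initiator interface, and ping checks in both directions. A condensed sketch of the same bring-up, using only names and addresses that appear in the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way and is omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br      # bridge the host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                           # host reaches the namespaced target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # and the target reaches the host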
00:26:01.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:26:01.089 00:26:01.089 --- 10.0.0.1 ping statistics --- 00:26:01.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.089 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:01.089 10:06:38 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:01.089 10:06:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:01.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:01.654 Waiting for block devices as requested 00:26:01.654 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:01.654 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:01.913 No valid GPT data, bailing 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:01.913 No valid GPT data, bailing 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:01.913 No valid GPT data, bailing 00:26:01.913 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:02.173 No valid GPT data, bailing 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -a 10.0.0.1 -t tcp -s 4420 00:26:02.173 00:26:02.173 Discovery Log Number of Records 2, Generation counter 2 00:26:02.173 =====Discovery Log Entry 0====== 00:26:02.173 trtype: tcp 00:26:02.173 adrfam: ipv4 00:26:02.173 subtype: current discovery subsystem 00:26:02.173 treq: not specified, sq flow control disable supported 00:26:02.173 portid: 1 00:26:02.173 trsvcid: 4420 00:26:02.173 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:02.173 traddr: 10.0.0.1 00:26:02.173 eflags: none 00:26:02.173 sectype: none 00:26:02.173 =====Discovery Log Entry 1====== 00:26:02.173 trtype: tcp 00:26:02.173 adrfam: ipv4 00:26:02.173 subtype: nvme subsystem 00:26:02.173 treq: not specified, sq flow control disable supported 00:26:02.173 portid: 1 00:26:02.173 trsvcid: 4420 00:26:02.173 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:02.173 traddr: 10.0.0.1 00:26:02.173 eflags: none 00:26:02.173 sectype: none 00:26:02.173 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:02.173 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:02.433 ===================================================== 00:26:02.433 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:02.433 ===================================================== 00:26:02.434 Controller Capabilities/Features 00:26:02.434 ================================ 00:26:02.434 Vendor ID: 0000 00:26:02.434 Subsystem Vendor ID: 0000 00:26:02.434 Serial Number: 774a42790fe08c73fb6f 00:26:02.434 Model Number: Linux 00:26:02.434 Firmware Version: 6.5.12-2 00:26:02.434 Recommended Arb Burst: 0 
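The configure_kernel_target helper traced above picks the first NVMe block device that is neither zoned nor holding a partition table (the spdk-gpt.py / blkid probes that print "No valid GPT data, bailing"), then exports it through the kernel nvmet target over configfs at 10.0.0.1:4420. The xtrace does not show where each echo is redirected, so the attribute paths in the sketch below are the standard nvmet configfs names rather than something printed in the log, and /dev/nvme1n1 is simply the device this run selected:

  modprobe nvmet
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo 1            > "$subsys/attr_allow_any_host"        # assumed redirect target
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420                 # lists the discovery subsystem and testnqn

The two spdk_nvme_identify runs that follow connect to the discovery subsystem and to nqn.2016-06.io.spdk:testnqn respectively and dump their controller and namespace data.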
00:26:02.434 IEEE OUI Identifier: 00 00 00 00:26:02.434 Multi-path I/O 00:26:02.434 May have multiple subsystem ports: No 00:26:02.434 May have multiple controllers: No 00:26:02.434 Associated with SR-IOV VF: No 00:26:02.434 Max Data Transfer Size: Unlimited 00:26:02.434 Max Number of Namespaces: 0 00:26:02.434 Max Number of I/O Queues: 1024 00:26:02.434 NVMe Specification Version (VS): 1.3 00:26:02.434 NVMe Specification Version (Identify): 1.3 00:26:02.434 Maximum Queue Entries: 1024 00:26:02.434 Contiguous Queues Required: No 00:26:02.434 Arbitration Mechanisms Supported 00:26:02.434 Weighted Round Robin: Not Supported 00:26:02.434 Vendor Specific: Not Supported 00:26:02.434 Reset Timeout: 7500 ms 00:26:02.434 Doorbell Stride: 4 bytes 00:26:02.434 NVM Subsystem Reset: Not Supported 00:26:02.434 Command Sets Supported 00:26:02.434 NVM Command Set: Supported 00:26:02.434 Boot Partition: Not Supported 00:26:02.434 Memory Page Size Minimum: 4096 bytes 00:26:02.434 Memory Page Size Maximum: 4096 bytes 00:26:02.434 Persistent Memory Region: Not Supported 00:26:02.434 Optional Asynchronous Events Supported 00:26:02.434 Namespace Attribute Notices: Not Supported 00:26:02.434 Firmware Activation Notices: Not Supported 00:26:02.434 ANA Change Notices: Not Supported 00:26:02.434 PLE Aggregate Log Change Notices: Not Supported 00:26:02.434 LBA Status Info Alert Notices: Not Supported 00:26:02.434 EGE Aggregate Log Change Notices: Not Supported 00:26:02.434 Normal NVM Subsystem Shutdown event: Not Supported 00:26:02.434 Zone Descriptor Change Notices: Not Supported 00:26:02.434 Discovery Log Change Notices: Supported 00:26:02.434 Controller Attributes 00:26:02.434 128-bit Host Identifier: Not Supported 00:26:02.434 Non-Operational Permissive Mode: Not Supported 00:26:02.434 NVM Sets: Not Supported 00:26:02.434 Read Recovery Levels: Not Supported 00:26:02.434 Endurance Groups: Not Supported 00:26:02.434 Predictable Latency Mode: Not Supported 00:26:02.434 Traffic Based Keep ALive: Not Supported 00:26:02.434 Namespace Granularity: Not Supported 00:26:02.434 SQ Associations: Not Supported 00:26:02.434 UUID List: Not Supported 00:26:02.434 Multi-Domain Subsystem: Not Supported 00:26:02.434 Fixed Capacity Management: Not Supported 00:26:02.434 Variable Capacity Management: Not Supported 00:26:02.434 Delete Endurance Group: Not Supported 00:26:02.434 Delete NVM Set: Not Supported 00:26:02.434 Extended LBA Formats Supported: Not Supported 00:26:02.434 Flexible Data Placement Supported: Not Supported 00:26:02.434 00:26:02.434 Controller Memory Buffer Support 00:26:02.434 ================================ 00:26:02.434 Supported: No 00:26:02.434 00:26:02.434 Persistent Memory Region Support 00:26:02.434 ================================ 00:26:02.434 Supported: No 00:26:02.434 00:26:02.434 Admin Command Set Attributes 00:26:02.434 ============================ 00:26:02.434 Security Send/Receive: Not Supported 00:26:02.434 Format NVM: Not Supported 00:26:02.434 Firmware Activate/Download: Not Supported 00:26:02.434 Namespace Management: Not Supported 00:26:02.434 Device Self-Test: Not Supported 00:26:02.434 Directives: Not Supported 00:26:02.434 NVMe-MI: Not Supported 00:26:02.434 Virtualization Management: Not Supported 00:26:02.434 Doorbell Buffer Config: Not Supported 00:26:02.434 Get LBA Status Capability: Not Supported 00:26:02.434 Command & Feature Lockdown Capability: Not Supported 00:26:02.434 Abort Command Limit: 1 00:26:02.434 Async Event Request Limit: 1 00:26:02.434 Number of Firmware Slots: N/A 
00:26:02.434 Firmware Slot 1 Read-Only: N/A 00:26:02.434 Firmware Activation Without Reset: N/A 00:26:02.434 Multiple Update Detection Support: N/A 00:26:02.434 Firmware Update Granularity: No Information Provided 00:26:02.434 Per-Namespace SMART Log: No 00:26:02.434 Asymmetric Namespace Access Log Page: Not Supported 00:26:02.434 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:02.434 Command Effects Log Page: Not Supported 00:26:02.434 Get Log Page Extended Data: Supported 00:26:02.434 Telemetry Log Pages: Not Supported 00:26:02.434 Persistent Event Log Pages: Not Supported 00:26:02.434 Supported Log Pages Log Page: May Support 00:26:02.434 Commands Supported & Effects Log Page: Not Supported 00:26:02.434 Feature Identifiers & Effects Log Page:May Support 00:26:02.434 NVMe-MI Commands & Effects Log Page: May Support 00:26:02.434 Data Area 4 for Telemetry Log: Not Supported 00:26:02.434 Error Log Page Entries Supported: 1 00:26:02.434 Keep Alive: Not Supported 00:26:02.434 00:26:02.434 NVM Command Set Attributes 00:26:02.434 ========================== 00:26:02.434 Submission Queue Entry Size 00:26:02.434 Max: 1 00:26:02.434 Min: 1 00:26:02.434 Completion Queue Entry Size 00:26:02.434 Max: 1 00:26:02.434 Min: 1 00:26:02.434 Number of Namespaces: 0 00:26:02.434 Compare Command: Not Supported 00:26:02.434 Write Uncorrectable Command: Not Supported 00:26:02.434 Dataset Management Command: Not Supported 00:26:02.434 Write Zeroes Command: Not Supported 00:26:02.434 Set Features Save Field: Not Supported 00:26:02.434 Reservations: Not Supported 00:26:02.434 Timestamp: Not Supported 00:26:02.434 Copy: Not Supported 00:26:02.434 Volatile Write Cache: Not Present 00:26:02.434 Atomic Write Unit (Normal): 1 00:26:02.434 Atomic Write Unit (PFail): 1 00:26:02.434 Atomic Compare & Write Unit: 1 00:26:02.434 Fused Compare & Write: Not Supported 00:26:02.434 Scatter-Gather List 00:26:02.434 SGL Command Set: Supported 00:26:02.434 SGL Keyed: Not Supported 00:26:02.434 SGL Bit Bucket Descriptor: Not Supported 00:26:02.434 SGL Metadata Pointer: Not Supported 00:26:02.434 Oversized SGL: Not Supported 00:26:02.434 SGL Metadata Address: Not Supported 00:26:02.434 SGL Offset: Supported 00:26:02.434 Transport SGL Data Block: Not Supported 00:26:02.434 Replay Protected Memory Block: Not Supported 00:26:02.434 00:26:02.434 Firmware Slot Information 00:26:02.434 ========================= 00:26:02.434 Active slot: 0 00:26:02.434 00:26:02.434 00:26:02.434 Error Log 00:26:02.434 ========= 00:26:02.434 00:26:02.434 Active Namespaces 00:26:02.434 ================= 00:26:02.434 Discovery Log Page 00:26:02.434 ================== 00:26:02.434 Generation Counter: 2 00:26:02.434 Number of Records: 2 00:26:02.434 Record Format: 0 00:26:02.434 00:26:02.434 Discovery Log Entry 0 00:26:02.434 ---------------------- 00:26:02.434 Transport Type: 3 (TCP) 00:26:02.434 Address Family: 1 (IPv4) 00:26:02.434 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:02.434 Entry Flags: 00:26:02.434 Duplicate Returned Information: 0 00:26:02.434 Explicit Persistent Connection Support for Discovery: 0 00:26:02.434 Transport Requirements: 00:26:02.434 Secure Channel: Not Specified 00:26:02.434 Port ID: 1 (0x0001) 00:26:02.434 Controller ID: 65535 (0xffff) 00:26:02.434 Admin Max SQ Size: 32 00:26:02.434 Transport Service Identifier: 4420 00:26:02.434 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:02.434 Transport Address: 10.0.0.1 00:26:02.434 Discovery Log Entry 1 00:26:02.434 ---------------------- 
00:26:02.434 Transport Type: 3 (TCP) 00:26:02.434 Address Family: 1 (IPv4) 00:26:02.434 Subsystem Type: 2 (NVM Subsystem) 00:26:02.434 Entry Flags: 00:26:02.434 Duplicate Returned Information: 0 00:26:02.434 Explicit Persistent Connection Support for Discovery: 0 00:26:02.434 Transport Requirements: 00:26:02.434 Secure Channel: Not Specified 00:26:02.434 Port ID: 1 (0x0001) 00:26:02.434 Controller ID: 65535 (0xffff) 00:26:02.434 Admin Max SQ Size: 32 00:26:02.434 Transport Service Identifier: 4420 00:26:02.434 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:02.434 Transport Address: 10.0.0.1 00:26:02.434 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:02.434 get_feature(0x01) failed 00:26:02.434 get_feature(0x02) failed 00:26:02.434 get_feature(0x04) failed 00:26:02.434 ===================================================== 00:26:02.434 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:02.434 ===================================================== 00:26:02.434 Controller Capabilities/Features 00:26:02.434 ================================ 00:26:02.434 Vendor ID: 0000 00:26:02.434 Subsystem Vendor ID: 0000 00:26:02.434 Serial Number: 29e887f50c61e939cb8b 00:26:02.434 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:02.434 Firmware Version: 6.5.12-2 00:26:02.434 Recommended Arb Burst: 6 00:26:02.434 IEEE OUI Identifier: 00 00 00 00:26:02.434 Multi-path I/O 00:26:02.434 May have multiple subsystem ports: Yes 00:26:02.434 May have multiple controllers: Yes 00:26:02.434 Associated with SR-IOV VF: No 00:26:02.435 Max Data Transfer Size: Unlimited 00:26:02.435 Max Number of Namespaces: 1024 00:26:02.435 Max Number of I/O Queues: 128 00:26:02.435 NVMe Specification Version (VS): 1.3 00:26:02.435 NVMe Specification Version (Identify): 1.3 00:26:02.435 Maximum Queue Entries: 1024 00:26:02.435 Contiguous Queues Required: No 00:26:02.435 Arbitration Mechanisms Supported 00:26:02.435 Weighted Round Robin: Not Supported 00:26:02.435 Vendor Specific: Not Supported 00:26:02.435 Reset Timeout: 7500 ms 00:26:02.435 Doorbell Stride: 4 bytes 00:26:02.435 NVM Subsystem Reset: Not Supported 00:26:02.435 Command Sets Supported 00:26:02.435 NVM Command Set: Supported 00:26:02.435 Boot Partition: Not Supported 00:26:02.435 Memory Page Size Minimum: 4096 bytes 00:26:02.435 Memory Page Size Maximum: 4096 bytes 00:26:02.435 Persistent Memory Region: Not Supported 00:26:02.435 Optional Asynchronous Events Supported 00:26:02.435 Namespace Attribute Notices: Supported 00:26:02.435 Firmware Activation Notices: Not Supported 00:26:02.435 ANA Change Notices: Supported 00:26:02.435 PLE Aggregate Log Change Notices: Not Supported 00:26:02.435 LBA Status Info Alert Notices: Not Supported 00:26:02.435 EGE Aggregate Log Change Notices: Not Supported 00:26:02.435 Normal NVM Subsystem Shutdown event: Not Supported 00:26:02.435 Zone Descriptor Change Notices: Not Supported 00:26:02.435 Discovery Log Change Notices: Not Supported 00:26:02.435 Controller Attributes 00:26:02.435 128-bit Host Identifier: Supported 00:26:02.435 Non-Operational Permissive Mode: Not Supported 00:26:02.435 NVM Sets: Not Supported 00:26:02.435 Read Recovery Levels: Not Supported 00:26:02.435 Endurance Groups: Not Supported 00:26:02.435 Predictable Latency Mode: Not Supported 00:26:02.435 Traffic Based Keep ALive: 
Supported 00:26:02.435 Namespace Granularity: Not Supported 00:26:02.435 SQ Associations: Not Supported 00:26:02.435 UUID List: Not Supported 00:26:02.435 Multi-Domain Subsystem: Not Supported 00:26:02.435 Fixed Capacity Management: Not Supported 00:26:02.435 Variable Capacity Management: Not Supported 00:26:02.435 Delete Endurance Group: Not Supported 00:26:02.435 Delete NVM Set: Not Supported 00:26:02.435 Extended LBA Formats Supported: Not Supported 00:26:02.435 Flexible Data Placement Supported: Not Supported 00:26:02.435 00:26:02.435 Controller Memory Buffer Support 00:26:02.435 ================================ 00:26:02.435 Supported: No 00:26:02.435 00:26:02.435 Persistent Memory Region Support 00:26:02.435 ================================ 00:26:02.435 Supported: No 00:26:02.435 00:26:02.435 Admin Command Set Attributes 00:26:02.435 ============================ 00:26:02.435 Security Send/Receive: Not Supported 00:26:02.435 Format NVM: Not Supported 00:26:02.435 Firmware Activate/Download: Not Supported 00:26:02.435 Namespace Management: Not Supported 00:26:02.435 Device Self-Test: Not Supported 00:26:02.435 Directives: Not Supported 00:26:02.435 NVMe-MI: Not Supported 00:26:02.435 Virtualization Management: Not Supported 00:26:02.435 Doorbell Buffer Config: Not Supported 00:26:02.435 Get LBA Status Capability: Not Supported 00:26:02.435 Command & Feature Lockdown Capability: Not Supported 00:26:02.435 Abort Command Limit: 4 00:26:02.435 Async Event Request Limit: 4 00:26:02.435 Number of Firmware Slots: N/A 00:26:02.435 Firmware Slot 1 Read-Only: N/A 00:26:02.435 Firmware Activation Without Reset: N/A 00:26:02.435 Multiple Update Detection Support: N/A 00:26:02.435 Firmware Update Granularity: No Information Provided 00:26:02.435 Per-Namespace SMART Log: Yes 00:26:02.435 Asymmetric Namespace Access Log Page: Supported 00:26:02.435 ANA Transition Time : 10 sec 00:26:02.435 00:26:02.435 Asymmetric Namespace Access Capabilities 00:26:02.435 ANA Optimized State : Supported 00:26:02.435 ANA Non-Optimized State : Supported 00:26:02.435 ANA Inaccessible State : Supported 00:26:02.435 ANA Persistent Loss State : Supported 00:26:02.435 ANA Change State : Supported 00:26:02.435 ANAGRPID is not changed : No 00:26:02.435 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:02.435 00:26:02.435 ANA Group Identifier Maximum : 128 00:26:02.435 Number of ANA Group Identifiers : 128 00:26:02.435 Max Number of Allowed Namespaces : 1024 00:26:02.435 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:02.435 Command Effects Log Page: Supported 00:26:02.435 Get Log Page Extended Data: Supported 00:26:02.435 Telemetry Log Pages: Not Supported 00:26:02.435 Persistent Event Log Pages: Not Supported 00:26:02.435 Supported Log Pages Log Page: May Support 00:26:02.435 Commands Supported & Effects Log Page: Not Supported 00:26:02.435 Feature Identifiers & Effects Log Page:May Support 00:26:02.435 NVMe-MI Commands & Effects Log Page: May Support 00:26:02.435 Data Area 4 for Telemetry Log: Not Supported 00:26:02.435 Error Log Page Entries Supported: 128 00:26:02.435 Keep Alive: Supported 00:26:02.435 Keep Alive Granularity: 1000 ms 00:26:02.435 00:26:02.435 NVM Command Set Attributes 00:26:02.435 ========================== 00:26:02.435 Submission Queue Entry Size 00:26:02.435 Max: 64 00:26:02.435 Min: 64 00:26:02.435 Completion Queue Entry Size 00:26:02.435 Max: 16 00:26:02.435 Min: 16 00:26:02.435 Number of Namespaces: 1024 00:26:02.435 Compare Command: Not Supported 00:26:02.435 Write Uncorrectable Command: Not 
Supported 00:26:02.435 Dataset Management Command: Supported 00:26:02.435 Write Zeroes Command: Supported 00:26:02.435 Set Features Save Field: Not Supported 00:26:02.435 Reservations: Not Supported 00:26:02.435 Timestamp: Not Supported 00:26:02.435 Copy: Not Supported 00:26:02.435 Volatile Write Cache: Present 00:26:02.435 Atomic Write Unit (Normal): 1 00:26:02.435 Atomic Write Unit (PFail): 1 00:26:02.435 Atomic Compare & Write Unit: 1 00:26:02.435 Fused Compare & Write: Not Supported 00:26:02.435 Scatter-Gather List 00:26:02.435 SGL Command Set: Supported 00:26:02.435 SGL Keyed: Not Supported 00:26:02.435 SGL Bit Bucket Descriptor: Not Supported 00:26:02.435 SGL Metadata Pointer: Not Supported 00:26:02.435 Oversized SGL: Not Supported 00:26:02.435 SGL Metadata Address: Not Supported 00:26:02.435 SGL Offset: Supported 00:26:02.435 Transport SGL Data Block: Not Supported 00:26:02.435 Replay Protected Memory Block: Not Supported 00:26:02.435 00:26:02.435 Firmware Slot Information 00:26:02.435 ========================= 00:26:02.435 Active slot: 0 00:26:02.435 00:26:02.435 Asymmetric Namespace Access 00:26:02.435 =========================== 00:26:02.435 Change Count : 0 00:26:02.435 Number of ANA Group Descriptors : 1 00:26:02.435 ANA Group Descriptor : 0 00:26:02.435 ANA Group ID : 1 00:26:02.435 Number of NSID Values : 1 00:26:02.435 Change Count : 0 00:26:02.435 ANA State : 1 00:26:02.435 Namespace Identifier : 1 00:26:02.435 00:26:02.435 Commands Supported and Effects 00:26:02.435 ============================== 00:26:02.435 Admin Commands 00:26:02.435 -------------- 00:26:02.435 Get Log Page (02h): Supported 00:26:02.435 Identify (06h): Supported 00:26:02.435 Abort (08h): Supported 00:26:02.435 Set Features (09h): Supported 00:26:02.435 Get Features (0Ah): Supported 00:26:02.435 Asynchronous Event Request (0Ch): Supported 00:26:02.435 Keep Alive (18h): Supported 00:26:02.435 I/O Commands 00:26:02.435 ------------ 00:26:02.435 Flush (00h): Supported 00:26:02.435 Write (01h): Supported LBA-Change 00:26:02.435 Read (02h): Supported 00:26:02.435 Write Zeroes (08h): Supported LBA-Change 00:26:02.435 Dataset Management (09h): Supported 00:26:02.435 00:26:02.435 Error Log 00:26:02.435 ========= 00:26:02.435 Entry: 0 00:26:02.435 Error Count: 0x3 00:26:02.435 Submission Queue Id: 0x0 00:26:02.435 Command Id: 0x5 00:26:02.435 Phase Bit: 0 00:26:02.435 Status Code: 0x2 00:26:02.435 Status Code Type: 0x0 00:26:02.435 Do Not Retry: 1 00:26:02.695 Error Location: 0x28 00:26:02.695 LBA: 0x0 00:26:02.695 Namespace: 0x0 00:26:02.695 Vendor Log Page: 0x0 00:26:02.695 ----------- 00:26:02.695 Entry: 1 00:26:02.695 Error Count: 0x2 00:26:02.695 Submission Queue Id: 0x0 00:26:02.695 Command Id: 0x5 00:26:02.695 Phase Bit: 0 00:26:02.695 Status Code: 0x2 00:26:02.695 Status Code Type: 0x0 00:26:02.695 Do Not Retry: 1 00:26:02.695 Error Location: 0x28 00:26:02.695 LBA: 0x0 00:26:02.695 Namespace: 0x0 00:26:02.695 Vendor Log Page: 0x0 00:26:02.695 ----------- 00:26:02.695 Entry: 2 00:26:02.695 Error Count: 0x1 00:26:02.695 Submission Queue Id: 0x0 00:26:02.695 Command Id: 0x4 00:26:02.695 Phase Bit: 0 00:26:02.695 Status Code: 0x2 00:26:02.695 Status Code Type: 0x0 00:26:02.695 Do Not Retry: 1 00:26:02.695 Error Location: 0x28 00:26:02.695 LBA: 0x0 00:26:02.695 Namespace: 0x0 00:26:02.695 Vendor Log Page: 0x0 00:26:02.695 00:26:02.695 Number of Queues 00:26:02.695 ================ 00:26:02.695 Number of I/O Submission Queues: 128 00:26:02.695 Number of I/O Completion Queues: 128 00:26:02.695 00:26:02.695 ZNS 
Specific Controller Data 00:26:02.695 ============================ 00:26:02.695 Zone Append Size Limit: 0 00:26:02.695 00:26:02.695 00:26:02.695 Active Namespaces 00:26:02.695 ================= 00:26:02.695 get_feature(0x05) failed 00:26:02.695 Namespace ID:1 00:26:02.695 Command Set Identifier: NVM (00h) 00:26:02.695 Deallocate: Supported 00:26:02.695 Deallocated/Unwritten Error: Not Supported 00:26:02.695 Deallocated Read Value: Unknown 00:26:02.695 Deallocate in Write Zeroes: Not Supported 00:26:02.695 Deallocated Guard Field: 0xFFFF 00:26:02.695 Flush: Supported 00:26:02.695 Reservation: Not Supported 00:26:02.695 Namespace Sharing Capabilities: Multiple Controllers 00:26:02.695 Size (in LBAs): 1310720 (5GiB) 00:26:02.695 Capacity (in LBAs): 1310720 (5GiB) 00:26:02.695 Utilization (in LBAs): 1310720 (5GiB) 00:26:02.695 UUID: f3569060-6fd9-4e37-a5c2-2a752ace2d34 00:26:02.695 Thin Provisioning: Not Supported 00:26:02.695 Per-NS Atomic Units: Yes 00:26:02.695 Atomic Boundary Size (Normal): 0 00:26:02.695 Atomic Boundary Size (PFail): 0 00:26:02.695 Atomic Boundary Offset: 0 00:26:02.695 NGUID/EUI64 Never Reused: No 00:26:02.695 ANA group ID: 1 00:26:02.695 Namespace Write Protected: No 00:26:02.695 Number of LBA Formats: 1 00:26:02.695 Current LBA Format: LBA Format #00 00:26:02.695 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:26:02.696 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:02.696 rmmod nvme_tcp 00:26:02.696 rmmod nvme_fabrics 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:02.696 10:06:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:02.696 10:06:40 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:03.630 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:03.630 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:03.630 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:03.889 00:26:03.889 real 0m3.187s 00:26:03.889 user 0m1.038s 00:26:03.889 sys 0m1.644s 00:26:03.889 10:06:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:03.889 10:06:41 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:03.889 ************************************ 00:26:03.889 END TEST nvmf_identify_kernel_target 00:26:03.889 ************************************ 00:26:03.889 10:06:41 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:03.889 10:06:41 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:03.889 10:06:41 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:03.889 10:06:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:03.889 ************************************ 00:26:03.889 START TEST nvmf_auth_host 00:26:03.889 ************************************ 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:03.889 * Looking for test storage... 
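At the end of the identify_kernel_target test above, nvmftestfini and clean_kernel_target undo that setup: the host-side nvme-tcp and nvme-fabrics modules are removed, the initiator address is flushed, and the configfs tree is dismantled in reverse order before nvmet itself is unloaded and setup.sh rebinds the NVMe devices to uio_pci_generic. A condensed sketch of the teardown, with the same caveat that the echo redirect target is not visible in the xtrace:

  modprobe -r nvme-tcp nvme-fabrics
  ip -4 addr flush nvmf_init_if
  echo 0 > "$subsys/namespaces/1/enable"       # assumed redirect target
  rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
  modprobe -r nvmet_tcp nvmet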
00:26:03.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:03.889 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:03.890 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:03.890 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:03.890 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.890 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:03.890 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:03.890 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:03.890 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:03.890 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:04.148 Cannot find device "nvmf_tgt_br" 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:04.148 Cannot find device "nvmf_tgt_br2" 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:04.148 Cannot find device "nvmf_tgt_br" 
00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:04.148 Cannot find device "nvmf_tgt_br2" 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:04.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:04.148 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:04.148 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:04.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:26:04.407 00:26:04.407 --- 10.0.0.2 ping statistics --- 00:26:04.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.407 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:04.407 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:04.407 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:26:04.407 00:26:04.407 --- 10.0.0.3 ping statistics --- 00:26:04.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.407 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:04.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:26:04.407 00:26:04.407 --- 10.0.0.1 ping statistics --- 00:26:04.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.407 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91227 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91227 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 91227 ']' 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:04.407 10:06:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:04.407 10:06:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5099030d30c465bbde6e8cd819052ba7 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FC4 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5099030d30c465bbde6e8cd819052ba7 0 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5099030d30c465bbde6e8cd819052ba7 0 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5099030d30c465bbde6e8cd819052ba7 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FC4 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FC4 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.FC4 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5adbb196a4a4d7155c0afc69e380e76230c16ee25c160cd13d7372816455fcb3 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.WEu 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5adbb196a4a4d7155c0afc69e380e76230c16ee25c160cd13d7372816455fcb3 3 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5adbb196a4a4d7155c0afc69e380e76230c16ee25c160cd13d7372816455fcb3 3 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5adbb196a4a4d7155c0afc69e380e76230c16ee25c160cd13d7372816455fcb3 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:05.783 10:06:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:05.783 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.WEu 00:26:05.783 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.WEu 00:26:05.783 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.WEu 00:26:05.783 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d31041828889ad5fea78a1ca9d876ee5371600777501b8e7 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xMJ 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d31041828889ad5fea78a1ca9d876ee5371600777501b8e7 0 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d31041828889ad5fea78a1ca9d876ee5371600777501b8e7 0 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d31041828889ad5fea78a1ca9d876ee5371600777501b8e7 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xMJ 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xMJ 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.xMJ 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=040a5268f56a5aa78f805dff354ad868fdca086cdedd13dc 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.SaV 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 040a5268f56a5aa78f805dff354ad868fdca086cdedd13dc 2 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 040a5268f56a5aa78f805dff354ad868fdca086cdedd13dc 2 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=040a5268f56a5aa78f805dff354ad868fdca086cdedd13dc 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.SaV 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.SaV 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.SaV 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3168a5aae6b4601e1ac2e4451ae9dd53 00:26:05.784 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.KEg 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3168a5aae6b4601e1ac2e4451ae9dd53 
1 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3168a5aae6b4601e1ac2e4451ae9dd53 1 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3168a5aae6b4601e1ac2e4451ae9dd53 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.KEg 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.KEg 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.KEg 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fad9ad61feb0614b6754c3e8378cb31a 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LfA 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fad9ad61feb0614b6754c3e8378cb31a 1 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fad9ad61feb0614b6754c3e8378cb31a 1 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fad9ad61feb0614b6754c3e8378cb31a 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LfA 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LfA 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.LfA 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:06.042 10:06:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5871f351d9ff7df13b0a47539a995553d65f796f5a3326bc 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kRS 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5871f351d9ff7df13b0a47539a995553d65f796f5a3326bc 2 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5871f351d9ff7df13b0a47539a995553d65f796f5a3326bc 2 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5871f351d9ff7df13b0a47539a995553d65f796f5a3326bc 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kRS 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kRS 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.kRS 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6dfb7b27b91b650dbea70f72c6243176 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Wiq 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6dfb7b27b91b650dbea70f72c6243176 0 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6dfb7b27b91b650dbea70f72c6243176 0 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6dfb7b27b91b650dbea70f72c6243176 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:06.042 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:06.299 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Wiq 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Wiq 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Wiq 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=736f81ba23ef2e2826ff75bffc211e6850709dc6623e56f8a4ce0220c8d65e03 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xHg 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 736f81ba23ef2e2826ff75bffc211e6850709dc6623e56f8a4ce0220c8d65e03 3 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 736f81ba23ef2e2826ff75bffc211e6850709dc6623e56f8a4ce0220c8d65e03 3 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=736f81ba23ef2e2826ff75bffc211e6850709dc6623e56f8a4ce0220c8d65e03 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xHg 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xHg 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xHg 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91227 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 91227 ']' 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
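Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps the hex string as the secret, and wraps it in the DH-HMAC-CHAP secret representation; the DHHC-1:<id>:...: strings that appear later in this log (e.g. DHHC-1:00:NTA5OTAz...: for key 5099030d...) are simply base64 of that ASCII hex string with a checksum appended. A minimal stand-alone sketch of the wrapping, assuming the conventional encoding (little-endian CRC-32 appended before base64); the helper name and python calls here are illustrative, not lifted from nvmf/common.sh:
format_dhchap_secret() {   # sketch: $1 = hex secret string, $2 = digest id (0=null 1=sha256 2=sha384 3=sha512)
    local key=$1 digest=$2
    python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode(); digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")        # assumed: CRC-32 over the ASCII secret, little-endian
print("DHHC-1:%02x:%s:" % (digest, base64.b64encode(key + crc).decode()))' "$key" "$digest"
}
# e.g. format_dhchap_secret 5099030d30c465bbde6e8cd819052ba7 0   ->   DHHC-1:00:NTA5OTAz...: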
00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:06.300 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FC4 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.WEu ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WEu 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.xMJ 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.SaV ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SaV 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.KEg 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.LfA ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LfA 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.kRS 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Wiq ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Wiq 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xHg 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
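The entries that follow scan /sys/block for an NVMe disk that is neither mounted nor zoned, then hand it to configure_kernel_target, which assembles a Linux-kernel nvmet target through configfs. xtrace does not print redirection targets, so the echo destinations are invisible in this log; the layout those mkdir/echo/ln -s calls usually correspond to looks roughly like the following (attribute file names are assumptions based on the standard nvmet configfs interface, not read from nvmf/common.sh):
nqn=nqn.2024-02.io.spdk:cnode0
cfs=/sys/kernel/config/nvmet
mkdir $cfs/subsystems/$nqn                           # subsystem
mkdir $cfs/subsystems/$nqn/namespaces/1              # namespace 1 backed by the chosen disk
echo /dev/nvme1n1 > $cfs/subsystems/$nqn/namespaces/1/device_path
echo 1            > $cfs/subsystems/$nqn/namespaces/1/enable
mkdir $cfs/ports/1                                   # NVMe/TCP listener
echo 10.0.0.1 > $cfs/ports/1/addr_traddr
echo tcp      > $cfs/ports/1/addr_trtype
echo 4420     > $cfs/ports/1/addr_trsvcid
echo ipv4     > $cfs/ports/1/addr_adrfam
ln -s $cfs/subsystems/$nqn $cfs/ports/1/subsystems/  # expose the subsystem on the port
# later, host/auth.sh adds the host NQN under $cfs/hosts/, allows it on the subsystem,
# and writes the DHHC-1 secrets plus 'hmac(shaNNN)' / 'ffdheNNNN' into its dhchap_* attributes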
00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:06.563 10:06:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:07.136 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:07.136 Waiting for block devices as requested 00:26:07.136 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:07.394 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:07.959 No valid GPT data, bailing 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:07.959 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:08.217 No valid GPT data, bailing 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:08.217 No valid GPT data, bailing 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:08.217 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:08.217 No valid GPT data, bailing 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:08.475 10:06:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -a 10.0.0.1 -t tcp -s 4420 00:26:08.475 00:26:08.475 Discovery Log Number of Records 2, Generation counter 2 00:26:08.475 =====Discovery Log Entry 0====== 00:26:08.475 trtype: tcp 00:26:08.475 adrfam: ipv4 00:26:08.475 subtype: current discovery subsystem 00:26:08.475 treq: not specified, sq flow control disable supported 00:26:08.475 portid: 1 00:26:08.475 trsvcid: 4420 00:26:08.475 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:08.475 traddr: 10.0.0.1 00:26:08.475 eflags: none 00:26:08.475 sectype: none 00:26:08.475 =====Discovery Log Entry 1====== 00:26:08.475 trtype: tcp 00:26:08.475 adrfam: ipv4 00:26:08.475 subtype: nvme subsystem 00:26:08.475 treq: not specified, sq flow control disable supported 00:26:08.475 portid: 1 00:26:08.475 trsvcid: 4420 00:26:08.475 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:08.475 traddr: 10.0.0.1 00:26:08.475 eflags: none 00:26:08.475 sectype: none 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.475 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.733 nvme0n1 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.733 10:06:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.733 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.733 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.733 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.733 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.733 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.733 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:08.733 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.733 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.733 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:08.733 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.734 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.991 nvme0n1 00:26:08.991 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.991 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.992 nvme0n1 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.992 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.250 10:06:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.250 nvme0n1 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.250 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:09.251 10:06:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.251 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.509 nvme0n1 00:26:09.509 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.509 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.509 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.509 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.509 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:26:09.510 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.768 nvme0n1 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.768 10:06:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.026 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:10.026 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:10.026 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:10.026 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:10.026 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.026 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.026 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.026 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:10.026 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.026 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.027 nvme0n1 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.027 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.285 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.286 nvme0n1 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.286 10:06:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:10.286 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.545 nvme0n1 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:10.545 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.546 10:06:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.805 nvme0n1 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
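[Editor's sketch] The xtrace above repeats one pattern per DH group and key index: host/auth.sh@101 iterates the DH groups, @102 iterates the key indexes, @103 provisions the key on the kernel target (nvmet_auth_set_key), and @104 re-attaches the controller with DH-HMAC-CHAP (connect_authenticate). A minimal reconstruction of that driving loop is shown below; the keys/ckeys arrays and the two helper bodies live elsewhere in host/auth.sh and are only assumed here, not copied.
# Sketch of the loop this trace comes from (sha256 pass only); keys[]/ckeys[]
# and the helpers nvmet_auth_set_key / connect_authenticate are defined by the
# test script and are not reproduced in this sketch.
digest=sha256
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this trace
for dhgroup in "${dhgroups[@]}"; do                  # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                   # host/auth.sh@102
        # Push key $keyid (and its controller key, if defined) to the target.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # host/auth.sh@103
        # Attach with DH-HMAC-CHAP, verify the controller, then detach.
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # host/auth.sh@104
    done
done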
00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:10.805 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.063 nvme0n1 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.063 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
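[Editor's sketch] Inside connect_authenticate the trace shows four RPCs per iteration: bdev_nvme_set_options restricted to the digest and DH group under test, bdev_nvme_attach_controller with --dhchap-key/--dhchap-ctrlr-key, a bdev_nvme_get_controllers check that nvme0 appeared, and bdev_nvme_detach_controller. A stand-alone equivalent, using SPDK's scripts/rpc.py instead of the test's rpc_cmd wrapper, would look roughly like this (target address, NQNs and key names are the ones printed in the trace; the rpc.py path and a running target with those keys registered are assumptions):
rpc=./scripts/rpc.py
# Limit the initiator to the digest and DH group being exercised.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# Re-attach with DH-HMAC-CHAP using the key pair for this key index.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# The iteration passes if the controller shows up, then it is torn down again.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc bdev_nvme_detach_controller nvme0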
00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.658 10:06:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.917 nvme0n1 00:26:11.917 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.917 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.917 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.917 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.917 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.917 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.918 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.176 nvme0n1 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.176 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.435 nvme0n1 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.435 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.436 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.694 nvme0n1 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:12.694 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:12.695 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:12.695 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:12.695 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.695 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.695 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:12.695 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:12.695 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.695 10:06:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:12.695 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.695 10:06:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.695 10:06:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.695 10:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.952 nvme0n1 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.952 10:06:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:14.853 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.112 nvme0n1 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.112 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.113 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.113 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.113 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:26:15.113 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.113 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.113 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.113 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.113 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:15.113 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.113 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.680 nvme0n1 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.680 
10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.680 10:06:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.940 nvme0n1 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:15.940 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.507 nvme0n1 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.507 10:06:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.507 10:06:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.766 nvme0n1 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:16.766 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:16.767 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:16.767 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:16.767 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.334 nvme0n1 00:26:17.334 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.334 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.334 10:06:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.334 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.334 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:17.593 10:06:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.160 nvme0n1 00:26:18.160 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.160 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.160 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.160 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.160 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.160 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.160 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.160 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.160 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.161 10:06:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.729 nvme0n1 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.729 
10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
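The records above are one pass of connect_authenticate (sha256 / ffdhe8192 / key index 3) buried in xtrace output. Condensed, the initiator-side sequence being exercised looks like the sketch below. This is only an illustrative reconstruction, not the verbatim harness code: rpc_cmd is the autotest helper (assumed here to wrap SPDK's scripts/rpc.py, path is a placeholder), and the key names key3/ckey3 are assumed to have been registered earlier in the run, outside this excerpt.

    #!/usr/bin/env bash
    set -euo pipefail
    # Illustrative stand-in for the harness helper; the SPDK path is a placeholder.
    rpc_cmd() { /path/to/spdk/scripts/rpc.py "$@"; }

    digest=sha256
    dhgroup=ffdhe8192
    keyid=3

    # Restrict the initiator to the digest/DH-group pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with DH-HMAC-CHAP; the controller (bidirectional) key is passed only
    # when ckeys[keyid] is non-empty -- key index 4 in the trace attaches without it.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Verify the controller is visible, then detach before the next combination.
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
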
00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:18.729 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.665 nvme0n1 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:19.665 
10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:19.665 10:06:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.233 nvme0n1 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.233 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.234 nvme0n1 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
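At this point the outer digest loop has advanced from sha256 to sha384 and the dhgroup/key sweep restarts at ffdhe2048, so the same record pattern repeats. For orientation, the driving structure visible at host/auth.sh@100-104 reconstructs roughly as below. This is a sketch, not the verbatim script: the array contents are placeholders for the DHHC-1:<id>:<base64>: secrets printed in the trace (ckeys[4] is empty, which is why key index 4 is attached without a controller key), and the real arrays may contain digests or DH groups beyond the ones visible in this excerpt.

    # Rough reconstruction of the sweep driving the repeated records above.
    digests=(sha256 sha384)                               # later digests not visible in this excerpt
    dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)    # only the groups seen in this excerpt
    keys=(K0 K1 K2 K3 K4)                                 # placeholders for the DHHC-1 host secrets
    ckeys=(C0 C1 C2 C3 "")                                # ckeys[4] is empty in the trace

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          # Target side: select hmac($digest), $dhgroup, the key and (if set) the ctrl key.
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          # Initiator side: set_options + attach + verify + detach (see the sketch above).
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done
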
00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.234 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.493 nvme0n1 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.493 nvme0n1 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.493 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.752 10:06:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.752 nvme0n1 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:20.752 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.011 nvme0n1 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
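On the target side, nvmet_auth_set_key (the auth.sh@42-@51 lines above) installs the digest, DH group, host key and, when a controller key exists, the bidirectional key for the current keyid; the bare echo lines in the trace are its writes, with the redirections hidden by xtrace. A sketch of what that plausibly amounts to, assuming the test provisions the Linux kernel nvmet target through configfs; the /sys/kernel/config/nvmet path and the dhchap_* attribute names are assumptions not visible in this log, while the echoed values are exactly the ones traced for sha384/ffdhe3072, keyid 0:

    # Hedged sketch only: kernel-nvmet configfs writes matching one nvmet_auth_set_key call.
    hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed host entry

    echo 'hmac(sha384)' > "$hostdir/dhchap_hash"      # digest, as echoed at auth.sh@48
    echo 'ffdhe3072'    > "$hostdir/dhchap_dhgroup"   # DH group, as echoed at auth.sh@49
    echo 'DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv:' \
        > "$hostdir/dhchap_key"                       # host secret for keyid 0
    echo 'DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=:' \
        > "$hostdir/dhchap_ctrl_key"                  # controller secret (skipped when ckey is empty, e.g. keyid 4)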
00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.011 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 nvme0n1 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
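The shape of this whole trace comes from two nested loops in auth.sh (the @101-@104 lines above): the outer loop walks the DH groups, the inner loop walks every keyid, and each iteration re-keys the target and re-runs connect_authenticate. A condensed sketch of that driver, with the helper bodies elided; the array contents are illustrative, since only sha384 with ffdhe2048/ffdhe3072/ffdhe4096 and keyids 0-4 are visible in this portion of the log:

    # Hedged sketch of the loop producing the repeated blocks in this trace.
    digest=sha384                                  # only sha384 appears in this chunk
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)       # groups seen so far in the trace
    # keys[0..4] / ckeys[0..4] hold the DHHC-1 secrets exercised above; ckeys[4] is empty,
    # so keyid 4 is attached with --dhchap-key only (no --dhchap-ctrlr-key).

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # re-key the target (auth.sh@103)
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach (auth.sh@104)
        done
    done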
00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 nvme0n1 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.270 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.530 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.531 nvme0n1 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.531 10:06:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.789 nvme0n1 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:21.789 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:21.790 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.048 nvme0n1 00:26:22.048 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.048 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.048 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.048 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.048 10:06:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.048 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.048 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.048 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.048 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.048 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.048 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.049 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.307 nvme0n1 00:26:22.307 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.307 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.307 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.307 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.307 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.307 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.307 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.307 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.307 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.308 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.566 nvme0n1 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.566 10:06:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.566 10:06:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.824 nvme0n1 00:26:22.824 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:22.825 10:07:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:22.825 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.083 nvme0n1 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.083 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:26:23.084 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.342 nvme0n1 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.342 10:07:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.343 10:07:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.343 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.343 10:07:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.908 nvme0n1 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:23.908 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.166 nvme0n1 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.166 10:07:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.166 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.732 nvme0n1 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.732 10:07:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.991 nvme0n1 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
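The nvmet_auth_set_key traces above (host/auth.sh@42-51) show the target-side half of each iteration: the helper picks a digest and a DH group, then echoes the DHHC-1 host secret and, when one is defined, the controller secret (for keyid 4 the ckey is empty, hence the bare [[ -z '' ]] check at auth.sh@51). The destination of those echo statements is not visible in this excerpt; the configfs paths in the sketch below are an assumption based on the kernel nvmet in-band authentication interface that the nvmet_* helpers normally drive, and the secrets are truncated placeholders taken from the trace.

# Rough stand-alone equivalent of one nvmet_auth_set_key call
# (sha384 / ffdhe6144 / keyid 1); the configfs layout is assumed, not shown in this log.
host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
echo 'hmac(sha384)'        > "$host_cfg/dhchap_hash"       # digest, as echoed at auth.sh@48
echo ffdhe6144             > "$host_cfg/dhchap_dhgroup"    # DH group, as echoed at auth.sh@49
echo 'DHHC-1:00:ZDMx...==:' > "$host_cfg/dhchap_key"       # host secret (truncated)
echo 'DHHC-1:02:MDQw...==:' > "$host_cfg/dhchap_ctrl_key"  # controller secret, written only when a ckey exists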
00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:24.991 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.557 nvme0n1 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
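On the host side, each connect_authenticate iteration (host/auth.sh@55-65) restricts negotiation to the digest/DH-group pair under test with bdev_nvme_set_options, resolves the initiator address (10.0.0.1 here), attaches the controller with the matching --dhchap-key/--dhchap-ctrlr-key names, confirms via bdev_nvme_get_controllers that nvme0 came up, and detaches it before the next combination. A minimal stand-alone sketch of that sequence follows; it uses SPDK's scripts/rpc.py client in place of the test's rpc_cmd wrapper, and it assumes the key0/ckey0 secrets were already registered with the keyring earlier in the run (not visible in this excerpt).

# One connect_authenticate round (sha384 / ffdhe8192 / keyid 0), issued by hand.
./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192    # limit DH-HMAC-CHAP negotiation to the pair under test
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0              # bidirectional authentication
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0              # tear down before the next key/group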
00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:25.557 10:07:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.125 nvme0n1 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.125 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.691 nvme0n1 00:26:26.691 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.691 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.691 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.691 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.691 10:07:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.691 10:07:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:26.691 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.259 nvme0n1 00:26:27.259 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.259 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.259 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.259 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.259 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:27.518 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:27.519 10:07:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.084 nvme0n1 00:26:28.084 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.084 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:28.084 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.084 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.084 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.084 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.084 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.084 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.084 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.084 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.085 10:07:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:28.085 10:07:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.034 nvme0n1 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.034 nvme0n1 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.034 10:07:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.034 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.035 nvme0n1 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.035 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.293 nvme0n1 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.293 10:07:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.293 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.294 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.294 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.294 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.294 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.294 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.294 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.294 10:07:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:29.294 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.294 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.552 nvme0n1 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.552 nvme0n1 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.552 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.811 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.812 10:07:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.812 nvme0n1 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.812 
10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:29.812 10:07:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:29.812 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.071 nvme0n1 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.071 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.329 nvme0n1 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.329 10:07:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.329 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.588 nvme0n1 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.588 
10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.588 nvme0n1 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.588 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.846 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.846 10:07:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.846 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.846 10:07:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:30.846 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:30.847 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:30.847 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.847 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.847 nvme0n1 00:26:30.847 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:30.847 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.847 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.847 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:30.847 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.847 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.105 10:07:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.105 nvme0n1 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.105 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.367 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.368 nvme0n1 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.368 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.625 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.625 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.625 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.626 nvme0n1 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.626 10:07:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.626 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.626 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.626 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 nvme0n1 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:26:31.885 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.143 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.144 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:26:32.144 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.144 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.144 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.144 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.144 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.144 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.144 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.402 nvme0n1 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
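
Whether a round is unidirectional or bidirectional falls out of the ckey array: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion only produces the extra attach argument when a controller secret exists for that key index, which is why the keyid 4 rounds above (empty ckey, hence the [[ -z '' ]] guards) attach with --dhchap-key key4 alone. A minimal, standalone illustration of that ${var:+...} idiom with placeholder secrets:

  # Placeholder secrets, not real key material: index 1 has a controller key, 4 does not.
  ckeys=([1]='DHHC-1:02:placeholder:' [4]='')

  for keyid in 1 4; do
      extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#extra[@]} extra arg(s): ${extra[*]}"
  done
  # keyid=1 -> 2 extra arg(s): --dhchap-ctrlr-key ckey1
  # keyid=4 -> 0 extra arg(s):
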
00:26:32.402 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.403 10:07:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.970 nvme0n1 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:32.970 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.230 nvme0n1 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.230 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.546 nvme0n1 00:26:33.546 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:33.805 10:07:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:33.805 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.063 nvme0n1 00:26:34.063 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.063 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.063 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.063 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.063 10:07:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.063 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.063 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.063 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.063 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.063 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA5OTAzMGQzMGM0NjViYmRlNmU4Y2Q4MTkwNTJiYTdevGMv: 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: ]] 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWFkYmIxOTZhNGE0ZDcxNTVjMGFmYzY5ZTM4MGU3NjIzMGMxNmVlMjVjMTYwY2QxM2Q3MzcyODE2NDU1ZmNiM6XAfY0=: 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.322 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.323 10:07:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.889 nvme0n1 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:34.889 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.455 nvme0n1 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.455 10:07:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzE2OGE1YWFlNmI0NjAxZTFhYzJlNDQ1MWFlOWRkNTPVL9AH: 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: ]] 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZmFkOWFkNjFmZWIwNjE0YjY3NTRjM2U4Mzc4Y2IzMWH/WwCz: 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:35.455 10:07:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.391 nvme0n1 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTg3MWYzNTFkOWZmN2RmMTNiMGE0NzUzOWE5OTU1NTNkNjVmNzk2ZjVhMzMyNmJjJHYw8Q==: 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: ]] 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NmRmYjdiMjdiOTFiNjUwZGJlYTcwZjcyYzYyNDMxNzZY/k0S: 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:36.391 10:07:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.391 10:07:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.959 nvme0n1 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzM2ZjgxYmEyM2VmMmUyODI2ZmY3NWJmZmMyMTFlNjg1MDcwOWRjNjYyM2U1NmY4YTRjZTAyMjBjOGQ2NWUwM5L6Jhc=: 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:36.959 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:36.960 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:36.960 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:26:36.960 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.526 nvme0n1 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDMxMDQxODI4ODg5YWQ1ZmVhNzhhMWNhOWQ4NzZlZTUzNzE2MDA3Nzc1MDFiOGU39ccVVw==: 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: ]] 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDQwYTUyNjhmNTZhNWFhNzhmODA1ZGZmMzU0YWQ4NjhmZGNhMDg2Y2RlZGQxM2RjtRTs2A==: 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.526 
10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.526 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.785 2024/05/15 10:07:14 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:37.785 request: 00:26:37.785 { 00:26:37.785 "method": "bdev_nvme_attach_controller", 00:26:37.785 "params": { 00:26:37.785 "name": "nvme0", 00:26:37.785 "trtype": "tcp", 00:26:37.785 "traddr": "10.0.0.1", 00:26:37.785 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:37.785 "adrfam": "ipv4", 00:26:37.785 "trsvcid": "4420", 00:26:37.785 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:26:37.785 } 00:26:37.785 } 00:26:37.785 Got JSON-RPC error response 00:26:37.785 GoRPCClient: error on JSON-RPC call 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 
00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.785 10:07:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.785 2024/05/15 10:07:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:37.785 request: 00:26:37.785 { 00:26:37.785 "method": "bdev_nvme_attach_controller", 00:26:37.785 "params": { 00:26:37.785 "name": "nvme0", 00:26:37.785 "trtype": "tcp", 00:26:37.785 "traddr": "10.0.0.1", 00:26:37.785 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:37.785 "adrfam": "ipv4", 00:26:37.785 "trsvcid": "4420", 00:26:37.785 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:37.785 "dhchap_key": "key2" 00:26:37.785 } 00:26:37.785 } 
00:26:37.785 Got JSON-RPC error response 00:26:37.785 GoRPCClient: error on JSON-RPC call 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.785 2024/05/15 10:07:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:37.785 request: 00:26:37.785 { 00:26:37.785 "method": "bdev_nvme_attach_controller", 00:26:37.785 "params": { 00:26:37.785 "name": "nvme0", 00:26:37.785 "trtype": "tcp", 00:26:37.785 "traddr": "10.0.0.1", 00:26:37.785 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:37.785 "adrfam": "ipv4", 00:26:37.785 "trsvcid": "4420", 00:26:37.785 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:37.785 "dhchap_key": "key1", 00:26:37.785 "dhchap_ctrlr_key": "ckey2" 00:26:37.785 } 00:26:37.785 } 00:26:37.785 Got JSON-RPC error response 00:26:37.785 GoRPCClient: error on JSON-RPC call 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:37.785 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:37.786 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:37.786 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:37.786 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:37.786 rmmod nvme_tcp 00:26:37.786 rmmod nvme_fabrics 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91227 ']' 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91227 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 91227 ']' 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # kill -0 91227 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 91227 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 91227' 00:26:38.044 killing process with pid 91227 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 91227 00:26:38.044 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 91227 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:38.363 10:07:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:39.294 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:39.294 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:39.294 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:39.551 10:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.FC4 /tmp/spdk.key-null.xMJ /tmp/spdk.key-sha256.KEg /tmp/spdk.key-sha384.kRS /tmp/spdk.key-sha512.xHg /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:26:39.551 10:07:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:39.809 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:39.809 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:39.809 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:39.809 00:26:39.809 real 0m36.084s 00:26:39.809 user 0m32.392s 00:26:39.809 sys 0m4.678s 00:26:39.809 10:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:39.809 10:07:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.809 ************************************ 00:26:39.809 END TEST nvmf_auth_host 00:26:39.809 ************************************ 00:26:40.068 10:07:17 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:26:40.068 10:07:17 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:40.068 10:07:17 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:40.068 10:07:17 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:40.068 10:07:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:40.068 ************************************ 00:26:40.068 START TEST nvmf_digest 00:26:40.068 ************************************ 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:40.068 * Looking for test storage... 00:26:40.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:40.068 10:07:17 
nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:40.068 Cannot find device "nvmf_tgt_br" 00:26:40.068 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:26:40.069 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:40.327 Cannot find device "nvmf_tgt_br2" 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 
00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:40.327 Cannot find device "nvmf_tgt_br" 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:40.327 Cannot find device "nvmf_tgt_br2" 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:40.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:40.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:40.327 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:40.585 10:07:17 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:40.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:26:40.585 00:26:40.585 --- 10.0.0.2 ping statistics --- 00:26:40.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.585 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:40.585 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:40.585 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:26:40.585 00:26:40.585 --- 10.0.0.3 ping statistics --- 00:26:40.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.585 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:40.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:40.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:26:40.585 00:26:40.585 --- 10.0.0.1 ping statistics --- 00:26:40.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.585 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.585 ************************************ 00:26:40.585 START TEST nvmf_digest_clean 00:26:40.585 ************************************ 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # run_digest 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:40.585 10:07:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=92816 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 92816 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 92816 ']' 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:40.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:40.585 10:07:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:40.585 [2024-05-15 10:07:17.889377] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:26:40.585 [2024-05-15 10:07:17.889499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.889 [2024-05-15 10:07:18.034063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.889 [2024-05-15 10:07:18.213642] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.889 [2024-05-15 10:07:18.213727] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.889 [2024-05-15 10:07:18.213743] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.889 [2024-05-15 10:07:18.213757] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.889 [2024-05-15 10:07:18.213770] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:40.889 [2024-05-15 10:07:18.213809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.822 10:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:41.822 10:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:26:41.823 10:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:41.823 10:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:41.823 10:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:41.823 10:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.823 10:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:41.823 10:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:41.823 10:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:41.823 10:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:41.823 10:07:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:41.823 null0 00:26:41.823 [2024-05-15 10:07:19.084784] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.823 [2024-05-15 10:07:19.108710] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:41.823 [2024-05-15 10:07:19.109062] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92875 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92875 /var/tmp/bperf.sock 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 92875 ']' 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:41.823 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:41.823 10:07:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:41.823 [2024-05-15 10:07:19.165293] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:26:41.823 [2024-05-15 10:07:19.165397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92875 ] 00:26:42.081 [2024-05-15 10:07:19.303908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.340 [2024-05-15 10:07:19.470868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.907 10:07:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:42.907 10:07:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:26:42.907 10:07:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:42.907 10:07:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:42.907 10:07:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:43.166 10:07:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.166 10:07:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:43.423 nvme0n1 00:26:43.681 10:07:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:43.681 10:07:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:43.681 Running I/O for 2 seconds... 
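The trace above is the first run_bperf iteration of the clean-digest test (randread, 4 KiB blocks, queue depth 128, data digest enabled, DSA disabled). Condensed into plain shell and using only the binaries, socket path and RPC arguments visible in this trace, one iteration looks roughly like the sketch below; this is a simplified reading of the log, not the actual host/digest.sh, which wraps the same calls in its bperf_rpc/bperf_py helpers and uses waitforlisten before driving the socket.

    # Sketch of one run_bperf iteration, reconstructed from the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_sock=/var/tmp/bperf.sock

    # 1. Start bdevperf paused (--wait-for-rpc) with the requested workload.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$bperf_sock" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # Simplification: wait for the RPC socket to appear (the real harness
    # uses waitforlisten for this).
    until [ -S "$bperf_sock" ]; do sleep 0.1; done

    # 2. Finish framework init, then attach the target with data digest on.
    "$rpc" -s "$bperf_sock" framework_start_init
    "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 3. Kick off the timed run.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$bperf_sock" perform_tests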
00:26:46.220 00:26:46.220 Latency(us) 00:26:46.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.220 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:46.220 nvme0n1 : 2.01 22075.71 86.23 0.00 0.00 5790.98 3339.22 14542.75 00:26:46.220 =================================================================================================================== 00:26:46.220 Total : 22075.71 86.23 0.00 0.00 5790.98 3339.22 14542.75 00:26:46.220 0 00:26:46.220 10:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:46.220 10:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:46.220 10:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:46.220 10:07:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:46.220 | select(.opcode=="crc32c") 00:26:46.220 | "\(.module_name) \(.executed)"' 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92875 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 92875 ']' 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 92875 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 92875 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 92875' 00:26:46.220 killing process with pid 92875 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 92875 00:26:46.220 Received shutdown signal, test time was about 2.000000 seconds 00:26:46.220 00:26:46.220 Latency(us) 00:26:46.220 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:46.220 =================================================================================================================== 00:26:46.220 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:46.220 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 92875 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92965 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92965 /var/tmp/bperf.sock 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 92965 ']' 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:46.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:46.479 10:07:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:46.479 [2024-05-15 10:07:23.755259] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:26:46.479 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:46.479 Zero copy mechanism will not be used. 
00:26:46.479 [2024-05-15 10:07:23.755420] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92965 ] 00:26:46.738 [2024-05-15 10:07:23.909162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.738 [2024-05-15 10:07:24.085190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.672 10:07:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:47.672 10:07:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:26:47.672 10:07:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:47.672 10:07:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:47.673 10:07:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:47.930 10:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.930 10:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:48.496 nvme0n1 00:26:48.496 10:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:48.496 10:07:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:48.496 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:48.496 Zero copy mechanism will not be used. 00:26:48.496 Running I/O for 2 seconds... 
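[annotation] Once the socket is up, the harness finishes subsystem init, wires the target's namespace in over NVMe/TCP with data digest enabled, and drives the workload through bdevperf's Python helper. The three calls below are the ones echoed in the trace above, pulled out as a sketch; the only assumptions are that the same repo path and socket are in use.

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock

  # Finish SPDK initialization (bdevperf was started with --wait-for-rpc).
  "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init

  # Attach the target namespace as bdev nvme0n1 with TCP data digest (--ddgst) enabled.
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Kick off the configured workload ("Running I/O for 2 seconds..." in the log).
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests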
00:26:51.027 00:26:51.027 Latency(us) 00:26:51.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.027 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:51.027 nvme0n1 : 2.00 6659.52 832.44 0.00 0.00 2398.96 1146.88 9362.29 00:26:51.027 =================================================================================================================== 00:26:51.027 Total : 6659.52 832.44 0.00 0.00 2398.96 1146.88 9362.29 00:26:51.027 0 00:26:51.027 10:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:51.027 10:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:51.027 10:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:51.027 10:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:51.027 10:07:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:51.027 | select(.opcode=="crc32c") 00:26:51.027 | "\(.module_name) \(.executed)"' 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92965 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 92965 ']' 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 92965 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 92965 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 92965' 00:26:51.027 killing process with pid 92965 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 92965 00:26:51.027 Received shutdown signal, test time was about 2.000000 seconds 00:26:51.027 00:26:51.027 Latency(us) 00:26:51.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:51.027 =================================================================================================================== 00:26:51.027 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:51.027 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 92965 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93061 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93061 /var/tmp/bperf.sock 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 93061 ']' 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:51.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:51.286 10:07:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:51.286 [2024-05-15 10:07:28.614585] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
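[annotation] Each nvmf_digest_clean pass in this log is the same run_bperf helper called with a different rw/bs/qd combination (4096 bytes at queue depth 128 and 131072 bytes at queue depth 16, for both randread and randwrite, all with scan_dsa=false). A stripped-down sketch of how those positionals become bdevperf flags, inferred from the digest.sh@77-83 trace lines, is below; run_bperf_sketch is a hypothetical name, and the real helper also handles DSA scanning, RPC setup, and stats collection that are omitted here.

  # Sketch of run_bperf's front half: map rw/bs/qd onto bdevperf flags (scan_dsa handling omitted).
  run_bperf_sketch() {
      local rw=$1 bs=$2 qd=$3 scan_dsa=$4
      /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
          -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
      bperfpid=$!
  }

  # The clean-digest combinations exercised in this log:
  #   run_bperf_sketch randread  4096   128 false
  #   run_bperf_sketch randread  131072 16  false
  #   run_bperf_sketch randwrite 4096   128 false
  #   run_bperf_sketch randwrite 131072 16  false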
00:26:51.286 [2024-05-15 10:07:28.614698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93061 ] 00:26:51.544 [2024-05-15 10:07:28.752013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.544 [2024-05-15 10:07:28.916444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:52.477 10:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:52.477 10:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:26:52.477 10:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:52.477 10:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:52.477 10:07:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:53.043 10:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.043 10:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.301 nvme0n1 00:26:53.301 10:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:53.302 10:07:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:53.302 Running I/O for 2 seconds... 
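[annotation] The MiB/s column in the Latency(us) tables above is simply IOPS times the configured I/O size divided by 2^20, which makes a quick sanity check on a run: 22075.71 IOPS at 4096 bytes gives 86.23 MiB/s and 6659.52 IOPS at 131072 bytes gives 832.44 MiB/s, matching the tables. A throwaway one-liner to reproduce the figures (values copied from the tables, nothing else assumed):

  # MiB/s = IOPS * io_size_bytes / 2^20
  awk 'BEGIN { printf "%.2f MiB/s\n", 22075.71 * 4096   / 1048576 }'   # -> 86.23
  awk 'BEGIN { printf "%.2f MiB/s\n", 6659.52  * 131072 / 1048576 }'   # -> 832.44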
00:26:55.833 00:26:55.833 Latency(us) 00:26:55.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.833 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:55.833 nvme0n1 : 2.00 21941.74 85.71 0.00 0.00 5824.14 3027.14 17226.61 00:26:55.833 =================================================================================================================== 00:26:55.833 Total : 21941.74 85.71 0.00 0.00 5824.14 3027.14 17226.61 00:26:55.833 0 00:26:55.833 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:55.833 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:55.833 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:55.834 | select(.opcode=="crc32c") 00:26:55.834 | "\(.module_name) \(.executed)"' 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93061 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 93061 ']' 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 93061 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:55.834 10:07:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 93061 00:26:55.834 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:55.834 killing process with pid 93061 00:26:55.834 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:55.834 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 93061' 00:26:55.834 Received shutdown signal, test time was about 2.000000 seconds 00:26:55.834 00:26:55.834 Latency(us) 00:26:55.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.834 =================================================================================================================== 00:26:55.834 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:55.834 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 93061 00:26:55.834 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 93061 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93158 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93158 /var/tmp/bperf.sock 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # '[' -z 93158 ']' 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:56.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:56.092 10:07:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:56.092 [2024-05-15 10:07:33.452043] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:26:56.092 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:56.092 Zero copy mechanism will not be used. 
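[annotation] After every run above, the harness checks that the CRC32C digests were actually computed by the expected accel module (software here, since scan_dsa=false) by pulling accel_get_stats over the bperf socket and filtering for the crc32c opcode. A stand-alone sketch of that check, using the same RPC and the same jq filter that appears in the trace:

  SOCK=/var/tmp/bperf.sock
  # Pull accel framework stats and keep only the crc32c operation's module name and executed count.
  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # The clean-digest tests expect the software module to have done the work at least once.
  (( acc_executed > 0 )) && [[ $acc_module == software ]] &&
      echo "crc32c handled by $acc_module ($acc_executed ops)"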
00:26:56.092 [2024-05-15 10:07:33.452196] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93158 ] 00:26:56.349 [2024-05-15 10:07:33.597275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.606 [2024-05-15 10:07:33.773763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.173 10:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:57.173 10:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # return 0 00:26:57.173 10:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:57.173 10:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:57.173 10:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:57.740 10:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.740 10:07:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.998 nvme0n1 00:26:57.998 10:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:57.998 10:07:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:58.257 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:58.257 Zero copy mechanism will not be used. 00:26:58.257 Running I/O for 2 seconds... 
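[annotation] Each bdevperf instance above is torn down with the harness's killprocess helper, whose trace shows it validating the PID (non-empty, still alive per kill -0, sane process name from ps) before sending the kill and waiting for exit. A reduced sketch of that pattern follows; kill_bperf_sketch is a hypothetical name, and the harness's extra sudo/uname checks and logging are trimmed.

  # Minimal stand-in for the killprocess/wait pattern seen after each bdevperf run.
  kill_bperf_sketch() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" 2>/dev/null || return 0     # already gone
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                    # reap it so the next run starts clean
  }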
00:27:00.159 00:27:00.159 Latency(us) 00:27:00.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.159 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:00.159 nvme0n1 : 2.00 8447.06 1055.88 0.00 0.00 1889.96 1521.37 4306.65 00:27:00.159 =================================================================================================================== 00:27:00.159 Total : 8447.06 1055.88 0.00 0.00 1889.96 1521.37 4306.65 00:27:00.159 0 00:27:00.159 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:00.159 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:00.159 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:00.159 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:00.159 | select(.opcode=="crc32c") 00:27:00.159 | "\(.module_name) \(.executed)"' 00:27:00.159 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93158 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' -z 93158 ']' 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 93158 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 93158 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 93158' 00:27:00.417 killing process with pid 93158 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 93158 00:27:00.417 Received shutdown signal, test time was about 2.000000 seconds 00:27:00.417 00:27:00.417 Latency(us) 00:27:00.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.417 =================================================================================================================== 00:27:00.417 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:00.417 10:07:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 93158 00:27:00.983 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 92816 00:27:00.983 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@947 -- # '[' -z 92816 ']' 00:27:00.983 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # kill -0 92816 00:27:00.983 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # uname 00:27:00.983 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:00.983 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 92816 00:27:00.983 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:00.983 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:00.983 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 92816' 00:27:00.983 killing process with pid 92816 00:27:00.983 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # kill 92816 00:27:00.983 [2024-05-15 10:07:38.132768] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:00.983 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # wait 92816 00:27:01.241 ************************************ 00:27:01.241 END TEST nvmf_digest_clean 00:27:01.241 ************************************ 00:27:01.241 00:27:01.241 real 0m20.689s 00:27:01.241 user 0m39.011s 00:27:01.241 sys 0m5.944s 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:01.241 ************************************ 00:27:01.241 START TEST nvmf_digest_error 00:27:01.241 ************************************ 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # run_digest_error 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93277 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93277 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 93277 ']' 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:01.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:01.241 10:07:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.500 [2024-05-15 10:07:38.635357] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:27:01.500 [2024-05-15 10:07:38.635481] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.500 [2024-05-15 10:07:38.772587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.777 [2024-05-15 10:07:38.931107] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.777 [2024-05-15 10:07:38.931205] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.777 [2024-05-15 10:07:38.931225] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.777 [2024-05-15 10:07:38.931243] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.777 [2024-05-15 10:07:38.931258] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
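[annotation] The nvmf_digest_error test starts its own nvmf_tgt inside the test network namespace, again held at --wait-for-rpc so crc32c can be rerouted to the error accel module before the transport comes up. The launch line below is copied from the nvmfappstart trace above; the until-loop is only a stand-in for the harness's waitforlisten helper.

  SPDK=/home/vagrant/spdk_repo/spdk
  # Start the target in the test netns, held at --wait-for-rpc (from the nvmf/common.sh@480 trace).
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!

  # Wait for the target's default RPC socket before configuring it.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done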
00:27:01.777 [2024-05-15 10:07:38.931308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.362 [2024-05-15 10:07:39.728046] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.362 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.620 null0 00:27:02.620 [2024-05-15 10:07:39.888625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.620 [2024-05-15 10:07:39.912552] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:02.620 [2024-05-15 10:07:39.912958] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93321 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93321 /var/tmp/bperf.sock 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 93321 ']' 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:02.620 10:07:39 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:02.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:02.620 10:07:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:02.620 [2024-05-15 10:07:39.967958] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:27:02.620 [2024-05-15 10:07:39.968068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93321 ] 00:27:02.879 [2024-05-15 10:07:40.104642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.137 [2024-05-15 10:07:40.276193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.704 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:03.704 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:27:03.704 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:03.704 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:03.963 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:03.963 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:03.963 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.963 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:03.963 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.963 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.222 nvme0n1 00:27:04.222 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:04.222 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:04.222 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.481 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:04.481 10:07:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:04.481 10:07:41 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:04.481 Running I/O for 2 seconds... 00:27:04.481 [2024-05-15 10:07:41.723359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.723424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.723439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.734299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.734363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.734378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.745966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.746029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.746045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.756530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.756572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.756585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.766551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.766593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.766605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.777891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.777930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.777942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.789005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.789044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:126 nsid:1 lba:10747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.789056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.798905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.798944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.798957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.809610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.809647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.809659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.820360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.820418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.820432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.831588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.831635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.831648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.842259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.842321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.481 [2024-05-15 10:07:41.842336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.481 [2024-05-15 10:07:41.853309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.481 [2024-05-15 10:07:41.853397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.482 [2024-05-15 10:07:41.853412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.482 [2024-05-15 10:07:41.864775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.482 [2024-05-15 10:07:41.864863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.482 [2024-05-15 10:07:41.864879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.741 [2024-05-15 10:07:41.875909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.741 [2024-05-15 10:07:41.875976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.741 [2024-05-15 10:07:41.875991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.741 [2024-05-15 10:07:41.887270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.741 [2024-05-15 10:07:41.887338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.741 [2024-05-15 10:07:41.887352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:41.898819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:41.898879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:41.898894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:41.912115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:41.912168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:41.912183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:41.923790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:41.923843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:41.923858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:41.935443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:41.935495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:41.935509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:41.946672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 
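[annotation] The wall of "data digest error" and "COMMAND TRANSIENT TRANSPORT ERROR" entries around this point is the intended outcome of the error test: crc32c on the target was assigned to the error accel module and then told to corrupt 256 operations, while the initiator attaches with --ddgst, counts NVMe errors, and retries indefinitely. A sketch of the digest-specific calls as they appear in the trace is below; the rpc_cmd calls in the trace go to the target's default /var/tmp/spdk.sock while the bperf_rpc calls use /var/tmp/bperf.sock, and the rest of common_target_config (the null0 bdev, TCP transport, and the 10.0.0.2:4420 listener) is not expanded here because its individual RPCs are not echoed in the log.

  SPDK=/home/vagrant/spdk_repo/spdk
  TGT=/var/tmp/spdk.sock     # nvmf_tgt (rpc_cmd default socket)
  BPERF=/var/tmp/bperf.sock  # bdevperf

  # Target side: route crc32c through the error module, but keep injection off during setup.
  "$SPDK/scripts/rpc.py" -s "$TGT" accel_assign_opc -o crc32c -m error
  "$SPDK/scripts/rpc.py" -s "$TGT" accel_error_inject_error -o crc32c -t disable

  # Initiator side: keep NVMe error stats, retry forever, attach with data digest enabled.
  "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt the next 256 crc32c operations on the target, then start I/O.
  "$SPDK/scripts/rpc.py" -s "$TGT" accel_error_inject_error -o crc32c -t corrupt -i 256
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests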
00:27:04.742 [2024-05-15 10:07:41.946714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:41.946728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:41.957800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:41.957852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:41.957866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:41.968627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:41.968671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:41.968684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:41.979366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:41.979408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:41.979422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:41.990956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:41.991009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:41.991024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:42.003381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:42.003446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:42.003462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:42.015525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:42.015588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:42.015604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:42.027221] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:42.027280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:42.027296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:42.039496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:42.039561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:42.039577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:42.050640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:42.050716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:42.050730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:42.061460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:42.061501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:42.061514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:42.071869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:42.071910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:42.071924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:42.082319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:42.082359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:42.082372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:42.093307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:42.093347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:42.093361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:42.105916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:42.105958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:42.105987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:04.742 [2024-05-15 10:07:42.116784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:04.742 [2024-05-15 10:07:42.116825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.742 [2024-05-15 10:07:42.116837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.127838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.127885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.127899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.138532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.138573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.138587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.148990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.149029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.149041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.159745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.159783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.159796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.170258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.170292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.170321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.180799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.180853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.180866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.191811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.191870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.191887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.202469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.202525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.202539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.214033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.214110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.214144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.225559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.225628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.225644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.236891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.236994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.237008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.248841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.248915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.248929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.259538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.259594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.259609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.271533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.271589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.271604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.283460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.283518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.283532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.292754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.292806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.292820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.305573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.305630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.305645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.316968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.317056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.317071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.328166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.328223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:05.002 [2024-05-15 10:07:42.328237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.339541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.339587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.339602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.350495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.002 [2024-05-15 10:07:42.350537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.002 [2024-05-15 10:07:42.350568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.002 [2024-05-15 10:07:42.361406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.003 [2024-05-15 10:07:42.361447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.003 [2024-05-15 10:07:42.361460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.003 [2024-05-15 10:07:42.372350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.003 [2024-05-15 10:07:42.372389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.003 [2024-05-15 10:07:42.372401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.003 [2024-05-15 10:07:42.383021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.003 [2024-05-15 10:07:42.383067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.003 [2024-05-15 10:07:42.383081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.262 [2024-05-15 10:07:42.394214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.262 [2024-05-15 10:07:42.394261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.262 [2024-05-15 10:07:42.394275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.262 [2024-05-15 10:07:42.405354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.262 [2024-05-15 10:07:42.405403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:13389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.262 [2024-05-15 10:07:42.405417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.262 [2024-05-15 10:07:42.416728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.262 [2024-05-15 10:07:42.416775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.262 [2024-05-15 10:07:42.416789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.262 [2024-05-15 10:07:42.428535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.262 [2024-05-15 10:07:42.428581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.262 [2024-05-15 10:07:42.428595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.262 [2024-05-15 10:07:42.440877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.262 [2024-05-15 10:07:42.440937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.440950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.451919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.451969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.451983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.462390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.462436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.462450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.472442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.472490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.472504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.484192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.484236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.484250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.494475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.494512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.494524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.505483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.505524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.505536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.515395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.515442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.515455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.528629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.528708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.528721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.539411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.539469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.539484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.549029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.549072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.549085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.560098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 
00:27:05.263 [2024-05-15 10:07:42.560153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.560168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.570285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.570320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.570331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.581747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.581783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.581794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.592270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.592318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.592329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.602974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.603013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.603027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.613526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.613558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.613570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.624748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.624787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.624800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.263 [2024-05-15 10:07:42.634978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.263 [2024-05-15 10:07:42.635022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.263 [2024-05-15 10:07:42.635035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.523 [2024-05-15 10:07:42.647454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.523 [2024-05-15 10:07:42.647518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-05-15 10:07:42.647533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.523 [2024-05-15 10:07:42.660038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.523 [2024-05-15 10:07:42.660121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-05-15 10:07:42.660137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.523 [2024-05-15 10:07:42.672254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.523 [2024-05-15 10:07:42.672301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-05-15 10:07:42.672315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.523 [2024-05-15 10:07:42.683012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.523 [2024-05-15 10:07:42.683051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.523 [2024-05-15 10:07:42.683065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.523 [2024-05-15 10:07:42.693961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.523 [2024-05-15 10:07:42.694002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.694032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.705778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.705819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.705848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.716431] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.716470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.716482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.726595] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.726637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.726666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.738674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.738716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.738729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.749803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.749842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.749855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.760641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.760678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.760691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.773713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.773753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.773765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.784133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.784186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.784201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.795901] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.795959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.795976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.808160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.808247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.808263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.819620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.819696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.819712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.831531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.831605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.831621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.842939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.843010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.843027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.853888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.853957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.853972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.865939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.866016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.866032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.876754] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.876793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.876805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.887380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.887447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.887462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.524 [2024-05-15 10:07:42.898683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.524 [2024-05-15 10:07:42.898746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.524 [2024-05-15 10:07:42.898759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.784 [2024-05-15 10:07:42.911124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.784 [2024-05-15 10:07:42.911229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.784 [2024-05-15 10:07:42.911247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.784 [2024-05-15 10:07:42.922008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.784 [2024-05-15 10:07:42.922077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.784 [2024-05-15 10:07:42.922107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.784 [2024-05-15 10:07:42.933142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.784 [2024-05-15 10:07:42.933181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.784 [2024-05-15 10:07:42.933196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.784 [2024-05-15 10:07:42.943566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.784 [2024-05-15 10:07:42.943609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.784 [2024-05-15 10:07:42.943640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.784 [2024-05-15 10:07:42.954217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.784 [2024-05-15 10:07:42.954296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.784 [2024-05-15 10:07:42.954310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.784 [2024-05-15 10:07:42.965168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.784 [2024-05-15 10:07:42.965212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.784 [2024-05-15 10:07:42.965243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.784 [2024-05-15 10:07:42.975534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.784 [2024-05-15 10:07:42.975573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.784 [2024-05-15 10:07:42.975603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.784 [2024-05-15 10:07:42.986294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.784 [2024-05-15 10:07:42.986328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.784 [2024-05-15 10:07:42.986339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.784 [2024-05-15 10:07:42.997436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.784 [2024-05-15 10:07:42.997473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.784 [2024-05-15 10:07:42.997486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.784 [2024-05-15 10:07:43.008597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.784 [2024-05-15 10:07:43.008641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.784 [2024-05-15 10:07:43.008654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.020265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.020308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:05.785 [2024-05-15 10:07:43.020322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.032120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.032199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.032217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.044182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.044257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.044273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.055678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.055754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.055769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.067302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.067378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.067393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.078062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.078153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.078168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.089735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.089814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.089828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.101613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.101698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:5646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.101714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.112094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.112183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.112198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.123608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.123673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.123687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.134023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.134065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.134094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.144970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.145012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.145025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:05.785 [2024-05-15 10:07:43.156629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:05.785 [2024-05-15 10:07:43.156705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.785 [2024-05-15 10:07:43.156720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.044 [2024-05-15 10:07:43.168917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.044 [2024-05-15 10:07:43.168985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.044 [2024-05-15 10:07:43.169000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.044 [2024-05-15 10:07:43.178538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.044 [2024-05-15 10:07:43.178578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.044 [2024-05-15 10:07:43.178590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.044 [2024-05-15 10:07:43.189075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.044 [2024-05-15 10:07:43.189122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.044 [2024-05-15 10:07:43.189134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.044 [2024-05-15 10:07:43.200959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.044 [2024-05-15 10:07:43.201032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.044 [2024-05-15 10:07:43.201046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.044 [2024-05-15 10:07:43.211713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.044 [2024-05-15 10:07:43.211779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.044 [2024-05-15 10:07:43.211793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.044 [2024-05-15 10:07:43.222742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.044 [2024-05-15 10:07:43.222785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.044 [2024-05-15 10:07:43.222798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.044 [2024-05-15 10:07:43.233388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.044 [2024-05-15 10:07:43.233423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.044 [2024-05-15 10:07:43.233435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.044 [2024-05-15 10:07:43.244450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.044 [2024-05-15 10:07:43.244487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.044 [2024-05-15 10:07:43.244499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.044 [2024-05-15 10:07:43.255748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 
00:27:06.044 [2024-05-15 10:07:43.255797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.044 [2024-05-15 10:07:43.255810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.044 [2024-05-15 10:07:43.267498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.044 [2024-05-15 10:07:43.267539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.044 [2024-05-15 10:07:43.267552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.279170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.279224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.279237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.291544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.291584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.291598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.303406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.303476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.303491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.314961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.315006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.315020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.326808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.326851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.326865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.338852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.338896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.338925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.350163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.350205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.350218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.361028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.361070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.361099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.372727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.372773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.372786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.382869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.382905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.382917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.393992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.394042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.394056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.408245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.408304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.408318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.045 [2024-05-15 10:07:43.419440] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.045 [2024-05-15 10:07:43.419495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.045 [2024-05-15 10:07:43.419509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.431462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.431512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.431526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.443064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.443118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.443145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.453710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.453752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.453766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.464896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.464942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.464955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.476214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.476270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.476282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.487016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.487055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.487068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:06.305 [2024-05-15 10:07:43.496160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.496221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.496235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.506748] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.506799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.506813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.517396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.517435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.517449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.529280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.529347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.529362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.540223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.540288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.540303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.552681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.552738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.552752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.563951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.564026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.564042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.574122] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.574185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.574199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.584695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.584775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.584789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.596255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.596304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.596318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.607117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.607198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.607213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.617744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.617793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.617805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.627794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.627836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.627850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.637778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.637811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.637823] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.648971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.649003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.649014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.659752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.659793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.659807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.671895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.671960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.671975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.305 [2024-05-15 10:07:43.682506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.305 [2024-05-15 10:07:43.682561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.305 [2024-05-15 10:07:43.682575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.564 [2024-05-15 10:07:43.693213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.564 [2024-05-15 10:07:43.693255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.564 [2024-05-15 10:07:43.693269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.564 [2024-05-15 10:07:43.703896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x210e9d0) 00:27:06.564 [2024-05-15 10:07:43.703948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.564 [2024-05-15 10:07:43.703963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:06.564 00:27:06.564 Latency(us) 00:27:06.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.564 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:06.564 nvme0n1 : 2.00 22726.86 88.78 0.00 0.00 5625.50 3417.23 14355.50 00:27:06.564 
=================================================================================================================== 00:27:06.564 Total : 22726.86 88.78 0.00 0.00 5625.50 3417.23 14355.50 00:27:06.564 0 00:27:06.564 10:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:06.564 10:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:06.564 10:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:06.564 10:07:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:06.564 | .driver_specific 00:27:06.564 | .nvme_error 00:27:06.564 | .status_code 00:27:06.564 | .command_transient_transport_error' 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 178 > 0 )) 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93321 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 93321 ']' 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 93321 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 93321 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:06.823 killing process with pid 93321 00:27:06.823 Received shutdown signal, test time was about 2.000000 seconds 00:27:06.823 00:27:06.823 Latency(us) 00:27:06.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.823 =================================================================================================================== 00:27:06.823 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 93321' 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 93321 00:27:06.823 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 93321 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93411 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:07.105 10:07:44 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93411 /var/tmp/bperf.sock 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 93411 ']' 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:07.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:07.105 10:07:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.105 [2024-05-15 10:07:44.480419] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:27:07.105 [2024-05-15 10:07:44.480513] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:27:07.105 Zero copy mechanism will not be used. 00:27:07.106 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93411 ] 00:27:07.364 [2024-05-15 10:07:44.617389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.621 [2024-05-15 10:07:44.773994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.187 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:08.187 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:27:08.187 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:08.187 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:08.445 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:08.445 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:08.445 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.445 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.445 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.445 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:08.704 nvme0n1 00:27:08.704 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:08.704 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 
00:27:08.704 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:08.704 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:08.704 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:08.705 10:07:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:08.965 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:08.965 Zero copy mechanism will not be used. 00:27:08.965 Running I/O for 2 seconds... 00:27:08.965 [2024-05-15 10:07:46.123542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.965 [2024-05-15 10:07:46.123609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.965 [2024-05-15 10:07:46.123624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.965 [2024-05-15 10:07:46.127813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.965 [2024-05-15 10:07:46.127862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.965 [2024-05-15 10:07:46.127877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.965 [2024-05-15 10:07:46.132265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.965 [2024-05-15 10:07:46.132321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.965 [2024-05-15 10:07:46.132347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.965 [2024-05-15 10:07:46.136793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.965 [2024-05-15 10:07:46.136835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.965 [2024-05-15 10:07:46.136848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.965 [2024-05-15 10:07:46.141026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.965 [2024-05-15 10:07:46.141068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.965 [2024-05-15 10:07:46.141082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.965 [2024-05-15 10:07:46.145174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.965 [2024-05-15 10:07:46.145214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.965 [2024-05-15 10:07:46.145226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.965 [2024-05-15 10:07:46.149517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.965 [2024-05-15 10:07:46.149555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.965 [2024-05-15 10:07:46.149568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.965 [2024-05-15 10:07:46.153472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.965 [2024-05-15 10:07:46.153506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.965 [2024-05-15 10:07:46.153517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.965 [2024-05-15 10:07:46.157502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.965 [2024-05-15 10:07:46.157537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.965 [2024-05-15 10:07:46.157549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.965 [2024-05-15 10:07:46.161735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.965 [2024-05-15 10:07:46.161769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.965 [2024-05-15 10:07:46.161780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.965 [2024-05-15 10:07:46.165716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.965 [2024-05-15 10:07:46.165752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.965 [2024-05-15 10:07:46.165763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.965 [2024-05-15 10:07:46.169811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.169849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.169861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.173950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.173991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.174002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.178282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.178322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.178334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.182844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.182883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.182896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.186992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.187031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.187044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.191417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.191462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.191476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.196216] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.196263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.196277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.200459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.200505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.200518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.204753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.204798] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.204811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.209156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.209196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.209209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.213280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.213318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.213330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.217938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.217980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.217993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.222327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.222368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.222381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.226528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.226570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.226583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.230922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.230974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.230986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.234798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.234835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.234847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.238974] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.239013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.239025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.243188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.243228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.243241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.247434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.247474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.247487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.251376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.251415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.251428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.255563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.255602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.255615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.260144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.260192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.260206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.264435] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.264477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.264489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.268461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.268503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.268515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.272794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.272835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.272846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.277175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.277210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.277239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.281273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.281310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.966 [2024-05-15 10:07:46.281321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.966 [2024-05-15 10:07:46.285512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.966 [2024-05-15 10:07:46.285548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.285576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.289657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.289692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.289704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:08.967 [2024-05-15 10:07:46.293890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.293926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.293937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.298051] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.298102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.298115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.302346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.302385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.302398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.306651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.306688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.306700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.310835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.310874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.310886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.315294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.315331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.315344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.319258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.319318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.319348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.323327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.323363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.323375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.327694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.327733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.327745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.331955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.331999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.332013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.336376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.336417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.336428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.340582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.340624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.340637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:08.967 [2024-05-15 10:07:46.344705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:08.967 [2024-05-15 10:07:46.344746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.967 [2024-05-15 10:07:46.344759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.227 [2024-05-15 10:07:46.348867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.227 [2024-05-15 10:07:46.348904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.227 [2024-05-15 10:07:46.348917] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.227 [2024-05-15 10:07:46.353453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.227 [2024-05-15 10:07:46.353489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.227 [2024-05-15 10:07:46.353500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.227 [2024-05-15 10:07:46.357873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.227 [2024-05-15 10:07:46.357912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.227 [2024-05-15 10:07:46.357940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.227 [2024-05-15 10:07:46.362141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.227 [2024-05-15 10:07:46.362177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.227 [2024-05-15 10:07:46.362205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.227 [2024-05-15 10:07:46.366328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.227 [2024-05-15 10:07:46.366363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.227 [2024-05-15 10:07:46.366375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.227 [2024-05-15 10:07:46.370657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.370692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.370703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.374785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.374820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.374832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.378858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.378897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:09.228 [2024-05-15 10:07:46.378909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.383258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.383299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.383313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.387296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.387331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.387343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.391228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.391262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.391290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.395370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.395405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.395418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.399324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.399362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.399375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.403532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.403570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.403583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.407448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.407486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.407498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.411538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.411574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.411587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.415775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.415813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.415826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.419966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.420004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.420016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.424259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.424313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.424325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.428486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.428533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.428545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.433209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.433257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.433288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.437384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.437435] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.437447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.441586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.441636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.441650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.445943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.445997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.446010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.450629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.450690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.450705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.455482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.455539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.455554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.460152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.460210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.460225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.465131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.465182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.465197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.469988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 
00:27:09.228 [2024-05-15 10:07:46.470032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.470062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.474391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.474431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.474444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.478972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.479013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.479043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.483481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.483522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.483536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.228 [2024-05-15 10:07:46.487627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.228 [2024-05-15 10:07:46.487668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.228 [2024-05-15 10:07:46.487682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.492281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.492325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.492339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.496776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.496818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.496831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.501389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.501434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.501464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.506253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.506295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.506309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.510782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.510822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.510836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.515230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.515263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.515278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.519633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.519668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.519681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.523946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.523981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.523995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.528413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.528453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.528466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.533161] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.533198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.533211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.537508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.537545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.537557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.542100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.542133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.542146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.546571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.546613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.546626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.550989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.551029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.551042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.555540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.555579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.555593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.560100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.560152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.560182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:27:09.229 [2024-05-15 10:07:46.564520] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.564561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.564574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.569086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.569141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.569155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.573397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.573435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.573448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.578148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.578187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.578201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.582676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.582715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.582728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.586990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.587030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.587042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.591273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.591311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.591324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.595555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.595597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.595610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.600132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.600169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.600183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.604560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.604597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.604610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.229 [2024-05-15 10:07:46.608789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.229 [2024-05-15 10:07:46.608827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.229 [2024-05-15 10:07:46.608839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.613211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.613249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.613262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.617474] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.617512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.617525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.621917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.621952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.621965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.626059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.626106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.626119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.630240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.630281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.630294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.634364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.634402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.634415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.638756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.638795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.638807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.643129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.643199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.643213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.647895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.647951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.647965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.652503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.652560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:09.490 [2024-05-15 10:07:46.652574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.657250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.657334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.657349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.662015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.662083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.662120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.666963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.667024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.667039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.671931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.671979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.671998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.676339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.676382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.676397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.681691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.681740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.681754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.686273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.686315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.686329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.690475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.690518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.690531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.694059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.694115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.694129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.698434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.698476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.698507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.703073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.703125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.703145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.706017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.706056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.706069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.709731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.709772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.709785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.713191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.713231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.713243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.717343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.717383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.490 [2024-05-15 10:07:46.717396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.490 [2024-05-15 10:07:46.721486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.490 [2024-05-15 10:07:46.721523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.721535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.725826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.725865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.725878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.730373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.730412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.730441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.734980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.735023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.735036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.739295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.739338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.739352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.743737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.743786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.743799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.748203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.748242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.748255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.752438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.752478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.752490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.756989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.757033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.757047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.761452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.761492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.761504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.765786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.765823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.765851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.769990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.770026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.770037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.774278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 
[2024-05-15 10:07:46.774314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.774325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.778619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.778655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.778666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.783054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.783101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.783113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.787462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.787505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.787519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.791592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.791636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.791667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.795933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.795980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.795993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.800330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.800374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.800387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.804758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.804801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.804812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.808971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.809012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.809023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.813187] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.491 [2024-05-15 10:07:46.813222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.491 [2024-05-15 10:07:46.813233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.491 [2024-05-15 10:07:46.817561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.817600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.817613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.821849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.821884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.821896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.826226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.826262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.826274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.830202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.830235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.830246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.834567] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.834605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.834617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.838643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.838681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.838693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.842664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.842700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.842711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.846608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.846643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.846654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.850701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.850737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.850748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.854602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.854638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.854650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.858731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.858768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.858779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
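For context on the repeated messages above: each event is the NVMe/TCP host rejecting received read data because the CRC32C data digest (DDGST) computed over the payload does not match the digest carried in the PDU, after which the READ is completed with COMMAND TRANSIENT TRANSPORT ERROR and dnr:0 (retryable). The sketch below is illustrative only, assuming a generic crc32c() helper and made-up field names; it does not reproduce SPDK's nvme_tcp.c code.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Assumed helper: CRC32C (Castagnoli) over a buffer, the polynomial used
     * for the NVMe/TCP data digest. Any CRC32C implementation could back this. */
    uint32_t crc32c(const void *buf, size_t len);

    /* Illustrative view of a received C2H data PDU: payload plus the DDGST
     * value taken from the wire. Field names are assumptions for this sketch. */
    struct c2h_data_view {
        const void *payload;
        size_t      payload_len;
        uint32_t    received_ddgst;
    };

    /* Returns true when the computed digest matches the received one. A false
     * result corresponds to the "data digest error" lines in this log, and the
     * associated command is then failed with a transient transport error. */
    static bool data_digest_ok(const struct c2h_data_view *pdu)
    {
        uint32_t computed = crc32c(pdu->payload, pdu->payload_len);
        return computed == pdu->received_ddgst;
    }

In this test the digest mismatches are injected deliberately, so the steady stream of errors with differing LBAs is the expected output rather than a sign of data corruption on the link.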
00:27:09.492 [2024-05-15 10:07:46.862876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.862915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.862929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.867225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.867267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.867280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.492 [2024-05-15 10:07:46.871652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.492 [2024-05-15 10:07:46.871692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.492 [2024-05-15 10:07:46.871706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.876164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.876214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.876244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.880551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.880592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.880604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.884884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.884921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.884932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.889358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.889395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.889406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.893587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.893626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.893637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.898166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.898203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.898231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.902477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.902517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.902528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.906828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.906870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.906882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.911568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.911613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.911644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.916076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.916130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.916160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.920056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.920110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.920123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.924489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.924530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.924543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.929240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.929282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.929294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.933654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.933694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.933706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.937832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.937871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.937884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.942291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.942335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.942348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.946506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.946550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.946563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.951005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.951048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.951062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.955606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.955655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.955669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.960079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.960137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.960152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.964504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.964548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.753 [2024-05-15 10:07:46.964561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.753 [2024-05-15 10:07:46.969125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.753 [2024-05-15 10:07:46.969167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:46.969180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:46.974026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:46.974100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:46.974116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:46.978837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:46.978896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:46.978910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:46.983894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:46.983958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 
[2024-05-15 10:07:46.983974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:46.988903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:46.988966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:46.988979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:46.993438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:46.993500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:46.993515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:46.998218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:46.998282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:46.998296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.003076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.003161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.003192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.007712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.007766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.007780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.012028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.012072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.012104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.016142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.016180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.016193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.020519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.020559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.020571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.025066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.025117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.025132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.029564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.029602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.029613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.033681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.033719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.033731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.037943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.037979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.037991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.042332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.042368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.042381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.046734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.046771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.046799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.050806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.050843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.050871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.055150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.055187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.055199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.059343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.059384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.059397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.063527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.063565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.063578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.067683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.067721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.067750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.071958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.071996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.754 [2024-05-15 10:07:47.072009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.754 [2024-05-15 10:07:47.076336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.754 [2024-05-15 10:07:47.076373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.076401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.080507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.080548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.080560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.084877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.084915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.084943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.088981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.089021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.089034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.093290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.093327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.093339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.097665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.097704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.097716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.101885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.101926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.101937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.106604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 
[2024-05-15 10:07:47.106649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.106662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.110783] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.110823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.110837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.115669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.115715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.115728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.120308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.120356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.120371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.125375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.125424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.125439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.129899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.129948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.129962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.755 [2024-05-15 10:07:47.134706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:09.755 [2024-05-15 10:07:47.134753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.755 [2024-05-15 10:07:47.134766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.015 [2024-05-15 10:07:47.139653] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc088b0) 00:27:10.015 [2024-05-15 10:07:47.139725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.015 [2024-05-15 10:07:47.139740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.015 [2024-05-15 10:07:47.143986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.015 [2024-05-15 10:07:47.144030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.015 [2024-05-15 10:07:47.144045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.015 [2024-05-15 10:07:47.148388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.015 [2024-05-15 10:07:47.148428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.015 [2024-05-15 10:07:47.148441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.015 [2024-05-15 10:07:47.152816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.015 [2024-05-15 10:07:47.152857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.015 [2024-05-15 10:07:47.152871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.015 [2024-05-15 10:07:47.157193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.015 [2024-05-15 10:07:47.157234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.015 [2024-05-15 10:07:47.157246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.015 [2024-05-15 10:07:47.161315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.015 [2024-05-15 10:07:47.161353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.015 [2024-05-15 10:07:47.161365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.015 [2024-05-15 10:07:47.165575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.015 [2024-05-15 10:07:47.165616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.015 [2024-05-15 10:07:47.165628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.015 [2024-05-15 10:07:47.169738] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.169775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.169803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.174019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.174057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.174068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.177867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.177903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.177915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.181989] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.182041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.182054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.186647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.186688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.186700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.190898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.190936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.190949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.195263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.195301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.195314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:10.016 [2024-05-15 10:07:47.199638] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.199682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.199696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.204243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.204293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.204306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.208701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.208741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.208753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.213246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.213283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.213295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.217663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.217705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.217718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.221995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.222034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.222046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.226177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.226213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.226225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.230282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.230319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.230331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.234351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.234390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.234418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.238204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.238242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.238255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.242215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.242251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.242263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.246599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.246639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.246651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.250871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.250922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.250951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.255227] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.255296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.255336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.259665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.259714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.259744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.263794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.263836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.263850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.268233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.268269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.268282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.272646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.272684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.272713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.277302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.277341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.277354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.281511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.281551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.281563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.016 [2024-05-15 10:07:47.285848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.016 [2024-05-15 10:07:47.285886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.016 [2024-05-15 10:07:47.285914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.290214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.290249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.290260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.294469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.294506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.294517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.298870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.298909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.298938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.303054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.303099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.303111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.307276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.307325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.307345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.311839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.311881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.311895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.316228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.316278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 
[2024-05-15 10:07:47.316290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.320627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.320664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.320675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.324932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.324970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.324982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.329301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.329338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.329366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.333844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.333883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.333896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.338037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.338076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.338100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.342406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.342447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.342459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.346468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.346506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.346517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.350751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.350788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.350799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.355007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.355044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.355056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.359459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.359526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.359539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.364576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.364631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.364644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.369230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.369287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.369301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.374088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.374155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.374170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.378553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.378599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.378628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.383375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.383422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.383436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.388006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.388055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.388069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.392736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.392778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.392791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.017 [2024-05-15 10:07:47.397136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.017 [2024-05-15 10:07:47.397178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.017 [2024-05-15 10:07:47.397192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.278 [2024-05-15 10:07:47.401616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.278 [2024-05-15 10:07:47.401658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.278 [2024-05-15 10:07:47.401688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.278 [2024-05-15 10:07:47.405962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.278 [2024-05-15 10:07:47.406003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.278 [2024-05-15 10:07:47.406017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.278 [2024-05-15 10:07:47.410622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.278 [2024-05-15 10:07:47.410664] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.278 [2024-05-15 10:07:47.410677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.278 [2024-05-15 10:07:47.415229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.278 [2024-05-15 10:07:47.415271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.278 [2024-05-15 10:07:47.415284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.278 [2024-05-15 10:07:47.419577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.278 [2024-05-15 10:07:47.419619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.278 [2024-05-15 10:07:47.419633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.278 [2024-05-15 10:07:47.423868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.278 [2024-05-15 10:07:47.423909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.278 [2024-05-15 10:07:47.423923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.278 [2024-05-15 10:07:47.428373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.278 [2024-05-15 10:07:47.428414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.278 [2024-05-15 10:07:47.428444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.278 [2024-05-15 10:07:47.432786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.278 [2024-05-15 10:07:47.432824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.278 [2024-05-15 10:07:47.432852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.278 [2024-05-15 10:07:47.436850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.278 [2024-05-15 10:07:47.436887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.278 [2024-05-15 10:07:47.436898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.278 [2024-05-15 10:07:47.441244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.278 
[2024-05-15 10:07:47.441281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.278 [2024-05-15 10:07:47.441293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.278 [2024-05-15 10:07:47.445623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.445664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.445677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.449885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.449924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.449936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.454295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.454333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.454345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.458711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.458753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.458765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.463293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.463361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.463375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.467732] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.467774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.467787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.472071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.472124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.472137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.476396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.476437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.476450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.480597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.480632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.480643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.484658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.484694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.484706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.489165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.489200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.489212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.493292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.493329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.493357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.497599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.497638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.497650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.501604] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.501642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.501655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.505672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.505710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.505722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.509878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.509919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.509931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.514196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.514234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.514246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.518712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.518752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.518781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.523228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.523285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.523301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.528626] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.528708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.528724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:10.279 [2024-05-15 10:07:47.533531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.533596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.533612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.537741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.537784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.537797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.542485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.542530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.542544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.546899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.546943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.546957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.551274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.551318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.551332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.555721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.555765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.555779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.560437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.560480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.279 [2024-05-15 10:07:47.560493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.279 [2024-05-15 10:07:47.565043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.279 [2024-05-15 10:07:47.565107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.565121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.569611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.569655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.569668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.573845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.573888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.573902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.578487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.578534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.578548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.583537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.583587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.583603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.588161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.588230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.588245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.593207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.593287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.593303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.598125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.598194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.598209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.602669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.602739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.602753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.607553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.607616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.607632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.611808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.611853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.611867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.616185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.616226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.616239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.620557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.620599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.620611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.624699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.624738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.624751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.629123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.629161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.629174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.633611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.633651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.633665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.637774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.637812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.637824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.642162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.642199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.642210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.646681] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.646723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.646752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.651281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.651325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.651338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.655729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.655770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 
[2024-05-15 10:07:47.655783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.280 [2024-05-15 10:07:47.660206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.280 [2024-05-15 10:07:47.660248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.280 [2024-05-15 10:07:47.660271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.664810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.664850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.664880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.669021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.669062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.669075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.673549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.673604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.673617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.677937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.677980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.677993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.681926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.681965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.681977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.685970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.686005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25504 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.686033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.690242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.690280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.690291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.694908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.694949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.694962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.699562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.699605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.699618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.704001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.704043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.704056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.708583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.708626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.708639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.713149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.713189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.713203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.717772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.717814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.717827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.722085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.722132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.722161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.726229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.726269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.726281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.730677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.730719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.730747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.735267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.735307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.735337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.739529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.739572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.739586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.541 [2024-05-15 10:07:47.744283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.541 [2024-05-15 10:07:47.744329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.541 [2024-05-15 10:07:47.744342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.748889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.748936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.748951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.753859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.753907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.753920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.758414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.758462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.758476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.763398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.763457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.763472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.768066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.768145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.768160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.772958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.773021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.773053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.777572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.777643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.777659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.782238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 
[2024-05-15 10:07:47.782300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.782315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.786765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.786840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.786855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.791025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.791071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.791085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.796012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.796058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.796072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.800718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.800760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.800773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.805179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.805219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.805232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.809415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.809457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.809470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.814193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.814242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.814273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.818696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.818737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.818766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.823060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.823127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.823149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.827563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.827608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.827622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.831797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.831840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.831853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.836401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.836441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.836453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.840806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.840847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.840859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.845230] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.845270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.845300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.849776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.849818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.849831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.854125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.854159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.854187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.858588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.858629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.858658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.863037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.863076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.863099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.867402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.867442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.867455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.542 [2024-05-15 10:07:47.871687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.542 [2024-05-15 10:07:47.871743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.542 [2024-05-15 10:07:47.871757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:10.543 [2024-05-15 10:07:47.876244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.876285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.876299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.543 [2024-05-15 10:07:47.880842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.880882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.880895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.543 [2024-05-15 10:07:47.885080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.885127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.885141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.543 [2024-05-15 10:07:47.889574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.889614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.889626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.543 [2024-05-15 10:07:47.893967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.894007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.894019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.543 [2024-05-15 10:07:47.898387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.898426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.898437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.543 [2024-05-15 10:07:47.902630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.902667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.902679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.543 [2024-05-15 10:07:47.906806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.906847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.906859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.543 [2024-05-15 10:07:47.911116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.911163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.911175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.543 [2024-05-15 10:07:47.915042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.915078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.915102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.543 [2024-05-15 10:07:47.919381] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.919420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.919432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.543 [2024-05-15 10:07:47.923629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.543 [2024-05-15 10:07:47.923670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.543 [2024-05-15 10:07:47.923683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.927886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.927929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.927943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.932133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.932169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.932181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.936694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.936746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.936774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.941171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.941205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.941217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.945226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.945260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.945287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.949417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.949454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.949466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.953637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.953676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.953688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.957862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.957901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.957912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.962053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.962100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.962112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.966075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.966125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.966137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.970323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.970362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.970375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.974913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.974953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.974966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.978991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.979032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.979045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.983443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.983507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.983522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.988137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.988177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.988190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.992379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.992420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 
[2024-05-15 10:07:47.992433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:47.996800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:47.996839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:47.996852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:48.001313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:48.001354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:48.001383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:48.006007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:48.006049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:48.006061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:48.010138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:48.010176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:48.010188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:48.014324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:48.014361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:48.014373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:48.018680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:48.018720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:48.018731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:48.022830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:48.022868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:48.022880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:48.027107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:48.027149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:48.027161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.802 [2024-05-15 10:07:48.031613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.802 [2024-05-15 10:07:48.031656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.802 [2024-05-15 10:07:48.031669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.036023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.036066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.036079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.040315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.040354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.040367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.044636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.044678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.044690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.049055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.049106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.049119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.053608] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.053648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.053678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.058328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.058368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.058381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.062510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.062548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.062576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.066955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.066994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.067006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.071384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.071424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.071437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.075699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.075742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.075756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.080361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.080415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.080429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.084695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.084742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.084772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.089311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.089362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.089375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.093563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.093602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.093614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.097906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.097947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.097959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.102102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.102148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.102160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.106273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.106310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.106322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.110466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 [2024-05-15 10:07:48.110504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.110516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:10.803 [2024-05-15 10:07:48.114793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc088b0) 00:27:10.803 
[2024-05-15 10:07:48.114835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:10.803 [2024-05-15 10:07:48.114848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:10.803 00:27:10.803 Latency(us) 00:27:10.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.803 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:10.803 nvme0n1 : 2.00 7048.15 881.02 0.00 0.00 2266.45 1045.46 8176.40 00:27:10.803 =================================================================================================================== 00:27:10.803 Total : 7048.15 881.02 0.00 0.00 2266.45 1045.46 8176.40 00:27:10.803 0 00:27:10.803 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:10.803 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:10.803 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:10.803 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:10.803 | .driver_specific 00:27:10.803 | .nvme_error 00:27:10.803 | .status_code 00:27:10.803 | .command_transient_transport_error' 00:27:11.368 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 455 > 0 )) 00:27:11.368 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93411 00:27:11.368 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 93411 ']' 00:27:11.369 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 93411 00:27:11.369 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:27:11.369 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:11.369 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 93411 00:27:11.369 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:11.369 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:11.369 killing process with pid 93411 00:27:11.369 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 93411' 00:27:11.369 Received shutdown signal, test time was about 2.000000 seconds 00:27:11.369 00:27:11.369 Latency(us) 00:27:11.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.369 =================================================================================================================== 00:27:11.369 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:11.369 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 93411 00:27:11.369 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 93411 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw 
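The check traced above (host/digest.sh@27 and @28) reads the transient transport error counter back from bdevperf: bperf_rpc issues bdev_get_iostat for nvme0n1 and jq drills into the controller's nvme_error status-code block. A minimal standalone sketch of the same query, assuming a bdevperf instance is already serving RPCs on /var/tmp/bperf.sock with nvme0n1 attached:

    # Query bdevperf's iostat for nvme0n1 and extract the
    # command_transient_transport_error counter (same jq filter as in the trace).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'

In this run the counter came back as 455, so the (( 455 > 0 )) assertion above passed; the count goes with the stats block, where the randread job averaged 7048.15 IOPS at a 131072-byte IO size, i.e. 7048.15 x 128 KiB, which is about the 881.02 MiB/s reported in the MiB/s column.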
bs qd 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93503 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93503 /var/tmp/bperf.sock 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 93503 ']' 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:11.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:11.626 10:07:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:11.627 [2024-05-15 10:07:48.930714] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:27:11.627 [2024-05-15 10:07:48.930836] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93503 ] 00:27:11.885 [2024-05-15 10:07:49.068670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.885 [2024-05-15 10:07:49.226900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.830 10:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:12.830 10:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:27:12.830 10:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:12.830 10:07:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:12.830 10:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:12.830 10:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:12.830 10:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:12.830 10:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:12.830 10:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.830 10:07:50 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:13.397 nvme0n1 00:27:13.397 10:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:13.397 10:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:13.397 10:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:13.397 10:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:13.397 10:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:13.397 10:07:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:13.397 Running I/O for 2 seconds... 00:27:13.397 [2024-05-15 10:07:50.699484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190de8a8 00:27:13.397 [2024-05-15 10:07:50.700195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.397 [2024-05-15 10:07:50.700240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:13.397 [2024-05-15 10:07:50.712615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fda78 00:27:13.397 [2024-05-15 10:07:50.714198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.397 [2024-05-15 10:07:50.714239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:13.397 [2024-05-15 10:07:50.720520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f7538 00:27:13.397 [2024-05-15 10:07:50.721262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.397 [2024-05-15 10:07:50.721300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:13.397 [2024-05-15 10:07:50.734016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f7538 00:27:13.397 [2024-05-15 10:07:50.735371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.397 [2024-05-15 10:07:50.735426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:13.397 [2024-05-15 10:07:50.744574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f5be8 00:27:13.397 [2024-05-15 10:07:50.745563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.397 [2024-05-15 10:07:50.745610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
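The trace above brings up the randwrite 4096-byte, queue-depth-128 error case: a second bdevperf (pid 93503) is started in wait-for-tests mode on /var/tmp/bperf.sock, NVMe error counters and a -1 retry count are enabled, the NVMe/TCP controller is attached with data digest turned on, crc32c corruption is injected through the accel error RPC, and perform_tests starts the 2-second run. A condensed sketch of that sequence, with paths, flags, address, and NQN copied from the trace; the accel injection is issued through rpc_cmd, which here appears to address the nvmf target application rather than bdevperf, so the default /var/tmp/spdk.sock used below is an assumption:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Start bdevperf on its own RPC socket; -z makes it wait for perform_tests.
    # (The harness waits for the socket via waitforlisten before issuing RPCs.)
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &

    # Track per-controller NVMe error statistics and set the bdev retry count.
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the NVMe/TCP controller with data digest enabled (--ddgst).
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject crc32c corruption via the accel error module (flags copied from
    # the trace; the RPC socket of the target application is assumed here).
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock \
        accel_error_inject_error -o crc32c -t corrupt -i 256

    # Kick off the timed run.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each digest failure on the write path then surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion, which is what the Data digest error records that follow show.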
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:13.397 [2024-05-15 10:07:50.755640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fd208 00:27:13.397 [2024-05-15 10:07:50.756694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.397 [2024-05-15 10:07:50.756736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:13.397 [2024-05-15 10:07:50.766706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190df988 00:27:13.397 [2024-05-15 10:07:50.767472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.397 [2024-05-15 10:07:50.767514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:13.397 [2024-05-15 10:07:50.779079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190df988 00:27:13.397 [2024-05-15 10:07:50.780518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.397 [2024-05-15 10:07:50.780562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:13.656 [2024-05-15 10:07:50.789620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ec408 00:27:13.656 [2024-05-15 10:07:50.790612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.656 [2024-05-15 10:07:50.790654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:13.656 [2024-05-15 10:07:50.799702] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f5378 00:27:13.656 [2024-05-15 10:07:50.800470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.656 [2024-05-15 10:07:50.800511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:13.656 [2024-05-15 10:07:50.813035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ebb98 00:27:13.656 [2024-05-15 10:07:50.814672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.656 [2024-05-15 10:07:50.814709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:13.656 [2024-05-15 10:07:50.822762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fc998 00:27:13.656 [2024-05-15 10:07:50.823891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.656 [2024-05-15 10:07:50.823931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:13.656 [2024-05-15 10:07:50.834060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e01f8 00:27:13.656 [2024-05-15 10:07:50.835405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.656 [2024-05-15 10:07:50.835445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:13.656 [2024-05-15 10:07:50.843400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f8618 00:27:13.656 [2024-05-15 10:07:50.844131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.656 [2024-05-15 10:07:50.844168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:13.656 [2024-05-15 10:07:50.854498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f0ff8 00:27:13.656 [2024-05-15 10:07:50.855518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.656 [2024-05-15 10:07:50.855557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:13.656 [2024-05-15 10:07:50.867081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e9168 00:27:13.656 [2024-05-15 10:07:50.868755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.656 [2024-05-15 10:07:50.868795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:13.656 [2024-05-15 10:07:50.876537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f96f8 00:27:13.656 [2024-05-15 10:07:50.877565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.656 [2024-05-15 10:07:50.877605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:50.887481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e0a68 00:27:13.657 [2024-05-15 10:07:50.888817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:50.888859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:50.897487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190eb328 00:27:13.657 [2024-05-15 10:07:50.898479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 
10:07:50.898525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:50.907395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f1868 00:27:13.657 [2024-05-15 10:07:50.908164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:50.908204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:50.920766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190eaab8 00:27:13.657 [2024-05-15 10:07:50.922490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:50.922525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:50.930359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f9f68 00:27:13.657 [2024-05-15 10:07:50.931474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:50.931515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:50.941711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e12d8 00:27:13.657 [2024-05-15 10:07:50.943169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:50.943210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:50.950340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ebb98 00:27:13.657 [2024-05-15 10:07:50.951122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:50.951193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:50.961681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fc998 00:27:13.657 [2024-05-15 10:07:50.962432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:50.962474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:50.971398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e49b0 00:27:13.657 [2024-05-15 10:07:50.972175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:13.657 [2024-05-15 10:07:50.972226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:50.983186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f8a50 00:27:13.657 [2024-05-15 10:07:50.983931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:50.983971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:50.993362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ebfd0 00:27:13.657 [2024-05-15 10:07:50.994036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:50.994072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:51.005958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ebfd0 00:27:13.657 [2024-05-15 10:07:51.007311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:51.007353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:51.014514] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190df118 00:27:13.657 [2024-05-15 10:07:51.015267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:51.015311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:51.026710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f8a50 00:27:13.657 [2024-05-15 10:07:51.027733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:51.027774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:13.657 [2024-05-15 10:07:51.036692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190df988 00:27:13.657 [2024-05-15 10:07:51.037626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.657 [2024-05-15 10:07:51.037668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:13.916 [2024-05-15 10:07:51.047884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f92c0 00:27:13.916 [2024-05-15 10:07:51.048855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25077 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:13.916 [2024-05-15 10:07:51.048896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:13.916 [2024-05-15 10:07:51.058058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f5378 00:27:13.916 [2024-05-15 10:07:51.058820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.916 [2024-05-15 10:07:51.058861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:13.916 [2024-05-15 10:07:51.071856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f8a50 00:27:13.916 [2024-05-15 10:07:51.073474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.916 [2024-05-15 10:07:51.073512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:13.916 [2024-05-15 10:07:51.081489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ebfd0 00:27:13.916 [2024-05-15 10:07:51.082542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.916 [2024-05-15 10:07:51.082578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:13.916 [2024-05-15 10:07:51.092426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190de8a8 00:27:13.916 [2024-05-15 10:07:51.093421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.093462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.102488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f2510 00:27:13.917 [2024-05-15 10:07:51.103408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.103448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.113159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ef6a8 00:27:13.917 [2024-05-15 10:07:51.114082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.114128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.122734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f92c0 00:27:13.917 [2024-05-15 10:07:51.123429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15264 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.123466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.135017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190de8a8 00:27:13.917 [2024-05-15 10:07:51.136633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.136669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.142639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e4140 00:27:13.917 [2024-05-15 10:07:51.143361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.143400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.155595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f81e0 00:27:13.917 [2024-05-15 10:07:51.156965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.157017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.165066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f4298 00:27:13.917 [2024-05-15 10:07:51.165774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.165813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.175967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e5220 00:27:13.917 [2024-05-15 10:07:51.176988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.177027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.188824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e88f8 00:27:13.917 [2024-05-15 10:07:51.190535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.190574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.198156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f31b8 00:27:13.917 [2024-05-15 10:07:51.199215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:14 nsid:1 lba:8895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.199257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.209012] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fda78 00:27:13.917 [2024-05-15 10:07:51.210072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.210121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.220392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e5220 00:27:13.917 [2024-05-15 10:07:51.221470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.221523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.230904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e7818 00:27:13.917 [2024-05-15 10:07:51.231985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.232031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.242945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f4298 00:27:13.917 [2024-05-15 10:07:51.244025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.244066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.253222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f8a50 00:27:13.917 [2024-05-15 10:07:51.254202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.254240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.265471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f8a50 00:27:13.917 [2024-05-15 10:07:51.266969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.267007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.272930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fac10 00:27:13.917 [2024-05-15 10:07:51.273617] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.273654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.285345] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f2d80 00:27:13.917 [2024-05-15 10:07:51.286692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.286734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:13.917 [2024-05-15 10:07:51.294741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e8d30 00:27:13.917 [2024-05-15 10:07:51.295479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:13.917 [2024-05-15 10:07:51.295520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.305776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ef6a8 00:27:14.176 [2024-05-15 10:07:51.306482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.306521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.318506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e84c0 00:27:14.176 [2024-05-15 10:07:51.320117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.320153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.326334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fbcf0 00:27:14.176 [2024-05-15 10:07:51.327030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.327067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.338888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e5220 00:27:14.176 [2024-05-15 10:07:51.340021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.340067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.349336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fc560 00:27:14.176 [2024-05-15 10:07:51.350280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.350321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.359241] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190de8a8 00:27:14.176 [2024-05-15 10:07:51.359943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.359980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.369835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f8a50 00:27:14.176 [2024-05-15 10:07:51.370504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.370539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.378984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f4b08 00:27:14.176 [2024-05-15 10:07:51.379692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.379728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.391355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f4b08 00:27:14.176 [2024-05-15 10:07:51.392612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.392648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.401406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e49b0 00:27:14.176 [2024-05-15 10:07:51.402323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.402358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.410490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e8d30 00:27:14.176 [2024-05-15 10:07:51.411378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.411412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.422364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e8d30 00:27:14.176 [2024-05-15 
10:07:51.423874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.176 [2024-05-15 10:07:51.423913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:14.176 [2024-05-15 10:07:51.430620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f92c0 00:27:14.177 [2024-05-15 10:07:51.431591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.431633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.440915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fa3a0 00:27:14.177 [2024-05-15 10:07:51.441597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.441638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.450734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f92c0 00:27:14.177 [2024-05-15 10:07:51.451416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.451456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.462708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f92c0 00:27:14.177 [2024-05-15 10:07:51.463866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.463905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.472673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fac10 00:27:14.177 [2024-05-15 10:07:51.473597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.473637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.482003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fa3a0 00:27:14.177 [2024-05-15 10:07:51.482943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.482979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.494383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fa3a0 
00:27:14.177 [2024-05-15 10:07:51.495939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.495975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.505136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e1710 00:27:14.177 [2024-05-15 10:07:51.506415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.506454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.515066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e1710 00:27:14.177 [2024-05-15 10:07:51.516400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.516436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.525868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f5378 00:27:14.177 [2024-05-15 10:07:51.526994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.527031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.535965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fac10 00:27:14.177 [2024-05-15 10:07:51.536897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.536935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.546944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f9b30 00:27:14.177 [2024-05-15 10:07:51.547923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.547961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:14.177 [2024-05-15 10:07:51.557172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fbcf0 00:27:14.177 [2024-05-15 10:07:51.558145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.177 [2024-05-15 10:07:51.558198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:14.473 [2024-05-15 10:07:51.570027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) 
with pdu=0x2000190fbcf0 00:27:14.473 [2024-05-15 10:07:51.571554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.473 [2024-05-15 10:07:51.571752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:14.473 [2024-05-15 10:07:51.579895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fe720 00:27:14.473 [2024-05-15 10:07:51.581037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.473 [2024-05-15 10:07:51.581224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:14.473 [2024-05-15 10:07:51.590967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e1710 00:27:14.473 [2024-05-15 10:07:51.592529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.473 [2024-05-15 10:07:51.592705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.473 [2024-05-15 10:07:51.604246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f2d80 00:27:14.473 [2024-05-15 10:07:51.606005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.473 [2024-05-15 10:07:51.606198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:14.473 [2024-05-15 10:07:51.612402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e3060 00:27:14.473 [2024-05-15 10:07:51.613543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.473 [2024-05-15 10:07:51.613722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:14.473 [2024-05-15 10:07:51.626203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e8d30 00:27:14.473 [2024-05-15 10:07:51.627382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.473 [2024-05-15 10:07:51.627562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:14.473 [2024-05-15 10:07:51.636347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e1f80 00:27:14.473 [2024-05-15 10:07:51.637477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.473 [2024-05-15 10:07:51.637647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:14.473 [2024-05-15 10:07:51.647531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x17daf70) with pdu=0x2000190e1710 00:27:14.473 [2024-05-15 10:07:51.648417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.473 [2024-05-15 10:07:51.648586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.660187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fb480 00:27:14.474 [2024-05-15 10:07:51.661394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.661570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.671008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fe720 00:27:14.474 [2024-05-15 10:07:51.672537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.672711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.682507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f5be8 00:27:14.474 [2024-05-15 10:07:51.683682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.683877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.695152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f5be8 00:27:14.474 [2024-05-15 10:07:51.696881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.697066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.706814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190de038 00:27:14.474 [2024-05-15 10:07:51.708254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.708441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.717149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e0ea0 00:27:14.474 [2024-05-15 10:07:51.718399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.718587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.727842] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e84c0 00:27:14.474 [2024-05-15 10:07:51.728668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.728832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.740455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e3060 00:27:14.474 [2024-05-15 10:07:51.741854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.742014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.750594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e3060 00:27:14.474 [2024-05-15 10:07:51.751988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.752174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.761684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f5378 00:27:14.474 [2024-05-15 10:07:51.762810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.763004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.773099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e8d30 00:27:14.474 [2024-05-15 10:07:51.774386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.774572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.787639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e95a0 00:27:14.474 [2024-05-15 10:07:51.789074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.789256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.797900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e95a0 00:27:14.474 [2024-05-15 10:07:51.799335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.799500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.810783] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e0ea0 00:27:14.474 [2024-05-15 10:07:51.812006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.812215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.822765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e8d30 00:27:14.474 [2024-05-15 10:07:51.824029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.824219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.836781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e8d30 00:27:14.474 [2024-05-15 10:07:51.838352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.838533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:14.474 [2024-05-15 10:07:51.847853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f5be8 00:27:14.474 [2024-05-15 10:07:51.849159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.474 [2024-05-15 10:07:51.849347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:51.858260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f2d80 00:27:14.734 [2024-05-15 10:07:51.859778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.859978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:51.872490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190eee38 00:27:14.734 [2024-05-15 10:07:51.874357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.874554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:51.883474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e1710 00:27:14.734 [2024-05-15 10:07:51.884654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.884821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:14.734 
[2024-05-15 10:07:51.893732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f6458 00:27:14.734 [2024-05-15 10:07:51.894865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.895027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:51.903964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e9e10 00:27:14.734 [2024-05-15 10:07:51.905078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.905261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:51.917083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e3060 00:27:14.734 [2024-05-15 10:07:51.918538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.918706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:51.927278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e3060 00:27:14.734 [2024-05-15 10:07:51.928697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.928861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:51.937872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e1f80 00:27:14.734 [2024-05-15 10:07:51.938805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.938961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:51.947300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e9e10 00:27:14.734 [2024-05-15 10:07:51.948301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.948454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:51.960051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190eff18 00:27:14.734 [2024-05-15 10:07:51.961208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.961377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 
dnr:0 00:27:14.734 [2024-05-15 10:07:51.970418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fda78 00:27:14.734 [2024-05-15 10:07:51.971511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.971680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:51.983834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e0ea0 00:27:14.734 [2024-05-15 10:07:51.985365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.985531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:51.994017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fb480 00:27:14.734 [2024-05-15 10:07:51.995074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:51.995260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:52.004481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e84c0 00:27:14.734 [2024-05-15 10:07:52.005480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:52.005645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:52.014648] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e1f80 00:27:14.734 [2024-05-15 10:07:52.015604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:52.015769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:52.027074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f5be8 00:27:14.734 [2024-05-15 10:07:52.028391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:52.028575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:52.040440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ea248 00:27:14.734 [2024-05-15 10:07:52.042983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:52.043220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:52.049958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190dece0 00:27:14.734 [2024-05-15 10:07:52.051205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:52.051368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:52.062584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ec408 00:27:14.734 [2024-05-15 10:07:52.064346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:52.064507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:52.072798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e4de8 00:27:14.734 [2024-05-15 10:07:52.073816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:52.073979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:14.734 [2024-05-15 10:07:52.082840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e27f0 00:27:14.734 [2024-05-15 10:07:52.083857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.734 [2024-05-15 10:07:52.084027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:14.735 [2024-05-15 10:07:52.093222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e4578 00:27:14.735 [2024-05-15 10:07:52.094199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.735 [2024-05-15 10:07:52.094363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:14.735 [2024-05-15 10:07:52.106450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e9168 00:27:14.735 [2024-05-15 10:07:52.108035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.735 [2024-05-15 10:07:52.108246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.117782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fd208 00:27:14.994 [2024-05-15 10:07:52.119012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.119222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.128211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f0350 00:27:14.994 [2024-05-15 10:07:52.129432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.129599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.139539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f8e88 00:27:14.994 [2024-05-15 10:07:52.140446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.140614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.149680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ebb98 00:27:14.994 [2024-05-15 10:07:52.150557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.150726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.163248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e88f8 00:27:14.994 [2024-05-15 10:07:52.164652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.164819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.175110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e9168 00:27:14.994 [2024-05-15 10:07:52.175994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.176215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.186465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ebb98 00:27:14.994 [2024-05-15 10:07:52.187457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.187645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.197368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ef270 00:27:14.994 [2024-05-15 10:07:52.198261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.198428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.211047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190df550 00:27:14.994 [2024-05-15 10:07:52.212399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.212576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.221977] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f2948 00:27:14.994 [2024-05-15 10:07:52.222998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.223189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.232094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ef270 00:27:14.994 [2024-05-15 10:07:52.233165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.233368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.245898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ee190 00:27:14.994 [2024-05-15 10:07:52.247728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.247920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.257127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fc560 00:27:14.994 [2024-05-15 10:07:52.258136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.258310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.268220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ed4e8 00:27:14.994 [2024-05-15 10:07:52.269509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.269689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.282166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f5be8 00:27:14.994 [2024-05-15 10:07:52.284070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.284258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.293193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fe720 00:27:14.994 [2024-05-15 10:07:52.294500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.294673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.302993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e6300 00:27:14.994 [2024-05-15 10:07:52.304399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.304572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.314790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190feb58 00:27:14.994 [2024-05-15 10:07:52.315908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.316107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.325245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f0bc0 00:27:14.994 [2024-05-15 10:07:52.326140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.326306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.338484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e01f8 00:27:14.994 [2024-05-15 10:07:52.340040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.340255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.349562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fdeb0 00:27:14.994 [2024-05-15 10:07:52.350613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 10:07:52.350782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.359856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f1868 00:27:14.994 [2024-05-15 10:07:52.360962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.994 [2024-05-15 
10:07:52.361153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:14.994 [2024-05-15 10:07:52.373326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fac10 00:27:14.994 [2024-05-15 10:07:52.374871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:14.995 [2024-05-15 10:07:52.375038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.384962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fef90 00:27:15.254 [2024-05-15 10:07:52.386073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.386253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.395346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f1868 00:27:15.254 [2024-05-15 10:07:52.396469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:10803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.396633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.408512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e01f8 00:27:15.254 [2024-05-15 10:07:52.409678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.409844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.418670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ed4e8 00:27:15.254 [2024-05-15 10:07:52.419932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.420105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.431732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fdeb0 00:27:15.254 [2024-05-15 10:07:52.433176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.433370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.441803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e6b70 00:27:15.254 [2024-05-15 10:07:52.443063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:15.254 [2024-05-15 10:07:52.443284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.454944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fe720 00:27:15.254 [2024-05-15 10:07:52.456909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.457065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.462812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fa7d8 00:27:15.254 [2024-05-15 10:07:52.463867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.464030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.475060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190edd58 00:27:15.254 [2024-05-15 10:07:52.476291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.476457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.484989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e01f8 00:27:15.254 [2024-05-15 10:07:52.486190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.486349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.497671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e6300 00:27:15.254 [2024-05-15 10:07:52.498933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.499105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.509336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e6300 00:27:15.254 [2024-05-15 10:07:52.511239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.511426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.517252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ed920 00:27:15.254 [2024-05-15 10:07:52.518280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6117 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.518446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.529889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fd640 00:27:15.254 [2024-05-15 10:07:52.531414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.531617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.540592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fbcf0 00:27:15.254 [2024-05-15 10:07:52.542063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.254 [2024-05-15 10:07:52.542253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:15.254 [2024-05-15 10:07:52.551440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fda78 00:27:15.254 [2024-05-15 10:07:52.552366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.255 [2024-05-15 10:07:52.552531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:15.255 [2024-05-15 10:07:52.561237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ed920 00:27:15.255 [2024-05-15 10:07:52.562063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.255 [2024-05-15 10:07:52.562232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:15.255 [2024-05-15 10:07:52.574081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fbcf0 00:27:15.255 [2024-05-15 10:07:52.575375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.255 [2024-05-15 10:07:52.575537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:15.255 [2024-05-15 10:07:52.586783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190df988 00:27:15.255 [2024-05-15 10:07:52.588553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.255 [2024-05-15 10:07:52.588716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:15.255 [2024-05-15 10:07:52.594819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190de038 00:27:15.255 [2024-05-15 10:07:52.595836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16398 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.255 [2024-05-15 10:07:52.596017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:15.255 [2024-05-15 10:07:52.607780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ed4e8 00:27:15.255 [2024-05-15 10:07:52.608873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.255 [2024-05-15 10:07:52.609047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:15.255 [2024-05-15 10:07:52.617782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190e0630 00:27:15.255 [2024-05-15 10:07:52.618817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.255 [2024-05-15 10:07:52.618975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:15.255 [2024-05-15 10:07:52.630813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f1ca0 00:27:15.255 [2024-05-15 10:07:52.632040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.255 [2024-05-15 10:07:52.632228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:15.512 [2024-05-15 10:07:52.641094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190ff3c8 00:27:15.512 [2024-05-15 10:07:52.642319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.512 [2024-05-15 10:07:52.642484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:15.512 [2024-05-15 10:07:52.654360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fbcf0 00:27:15.512 [2024-05-15 10:07:52.655778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.512 [2024-05-15 10:07:52.655951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:15.512 [2024-05-15 10:07:52.664832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fa7d8 00:27:15.512 [2024-05-15 10:07:52.666169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.512 [2024-05-15 10:07:52.666334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:15.512 [2024-05-15 10:07:52.675889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190f1868 00:27:15.512 [2024-05-15 10:07:52.676923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:11363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.512 [2024-05-15 10:07:52.677100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:15.512 [2024-05-15 10:07:52.686395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x17daf70) with pdu=0x2000190fd208 00:27:15.512 [2024-05-15 10:07:52.687417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:15.512 [2024-05-15 10:07:52.687582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:15.512 00:27:15.512 Latency(us) 00:27:15.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.512 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:15.512 nvme0n1 : 2.01 22896.89 89.44 0.00 0.00 5584.55 2652.65 14605.17 00:27:15.512 =================================================================================================================== 00:27:15.512 Total : 22896.89 89.44 0.00 0.00 5584.55 2652.65 14605.17 00:27:15.512 0 00:27:15.512 10:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:15.512 10:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:15.512 10:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:15.512 | .driver_specific 00:27:15.512 | .nvme_error 00:27:15.512 | .status_code 00:27:15.513 | .command_transient_transport_error' 00:27:15.513 10:07:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:15.771 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 180 > 0 )) 00:27:15.771 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93503 00:27:15.771 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 93503 ']' 00:27:15.771 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 93503 00:27:15.771 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:27:15.771 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:15.771 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 93503 00:27:15.771 killing process with pid 93503 00:27:15.771 Received shutdown signal, test time was about 2.000000 seconds 00:27:15.771 00:27:15.771 Latency(us) 00:27:15.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.771 =================================================================================================================== 00:27:15.771 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.771 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:15.771 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:15.771 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 93503' 00:27:15.771 
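The check traced above is the core of the error-path verification: host/digest.sh reads the per-bdev NVMe error counters kept by bdevperf (enabled earlier with bdev_nvme_set_options --nvme-error-stat) and asserts that at least one command completed with COMMAND TRANSIENT TRANSPORT ERROR. A minimal stand-alone sketch of that query, built only from the rpc.py path, socket, and jq filter visible in this trace and assuming the bdevperf instance is still listening on that socket (the count of 180 is specific to this run):

    #!/usr/bin/env bash
    # Ask bdevperf (listening on /var/tmp/bperf.sock) for nvme0n1's I/O stats and
    # pull out the transient transport error counter populated by --nvme-error-stat.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # The test only requires that the injected digest errors were observed at all.
    (( errcount > 0 )) && echo "transient transport errors: $errcount"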
10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 93503 00:27:15.771 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 93503 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93593 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93593 /var/tmp/bperf.sock 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 93593 ']' 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:16.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:16.337 10:07:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:16.337 [2024-05-15 10:07:53.518295] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:27:16.337 [2024-05-15 10:07:53.518707] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93593 ] 00:27:16.337 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:16.337 Zero copy mechanism will not be used. 
00:27:16.337 [2024-05-15 10:07:53.656600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.594 [2024-05-15 10:07:53.812176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:17.160 10:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:17.160 10:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:27:17.160 10:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:17.160 10:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:17.418 10:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:17.418 10:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.418 10:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:17.418 10:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.418 10:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:17.418 10:07:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:17.677 nvme0n1 00:27:17.935 10:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:17.935 10:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:17.935 10:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:17.935 10:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:17.935 10:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:17.935 10:07:55 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:17.935 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:17.935 Zero copy mechanism will not be used. 00:27:17.935 Running I/O for 2 seconds... 
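The RPC sequence just traced sets up the second pass (randwrite, 128 KiB I/O, queue depth 16, against the bdevperf instance started with -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z). Condensed into a sketch that uses only the commands and arguments visible in this trace; note that the accel error injection goes through rpc_cmd, i.e. the nvmf target's default RPC socket rather than the bperf socket:

    #!/usr/bin/env bash
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bperf.sock

    # Keep per-command NVMe error statistics and retry indefinitely, so injected
    # digest errors surface as transient transport error counts, not failed I/O.
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Error injection is toggled on the nvmf target (default RPC socket): first
    # disabled while the controller is attached with data digest (--ddgst) enabled,
    # then switched to corrupt mode (-t corrupt -i 32, as traced) for the measured run.
    "$RPC" accel_error_inject_error -o crc32c -t disable
    "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Kick off the timed workload through bdevperf's RPC helper.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests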
00:27:17.935 [2024-05-15 10:07:55.234820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.935 [2024-05-15 10:07:55.235176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.235228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.239307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.239675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.239716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.243806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.244167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.244207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.248373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.248742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.248781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.252981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.253329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.253378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.257408] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.257744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.257783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.261714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.262029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.262064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.266245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.266566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.266592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.270526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.270816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.270838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.274786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.275058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.275079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.278983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.279340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.279372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.283426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.283749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.283772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.287923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.288248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.288285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.292649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.292969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.293003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.297147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.297461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.297493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.301639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.301945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.301980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.306206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.306520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.306552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.310929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.311273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.311307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.936 [2024-05-15 10:07:55.315694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:17.936 [2024-05-15 10:07:55.316029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.936 [2024-05-15 10:07:55.316064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.320584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.320922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.320957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.325284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.325601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.325654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.329782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.330107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.330143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.334137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.334410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.334433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.338324] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.338610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.338633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.342552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.342857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.342883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.346775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.347101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.347153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.351101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.351427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.351458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.355212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.355505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 
[2024-05-15 10:07:55.355533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.359476] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.359772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.359796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.363545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.363831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.363865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.367610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.367867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.367888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.371410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.371672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.371717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.375377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.375646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.375668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.379066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.379363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.379391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.382894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.383191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.383214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.386802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.387051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.387073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.390604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.390824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.390842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.394576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.394830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.394866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.398717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.398968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.398989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.197 [2024-05-15 10:07:55.402658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.197 [2024-05-15 10:07:55.402885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.197 [2024-05-15 10:07:55.402905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.406616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.406862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.406891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.410396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.410622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.410641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.414201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.414424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.414443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.418053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.418323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.418352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.422319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.422581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.422604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.426595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.426881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.426914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.430555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.430798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.430819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.434393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.434617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.434637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.438386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.438635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.438692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.442620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.442899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.442937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.446577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.446833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.446855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.450582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.450828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.450850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.454486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.454741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.454762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.458354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.458599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.458620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.462344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.462604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.462627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.466198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 
[2024-05-15 10:07:55.466419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.466439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.469961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.470218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.470239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.473853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.474081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.474113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.477680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.477911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.477931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.481985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.482310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.482342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.486419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.486702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.486725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.490616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.490863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.490894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.494838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.495118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.495157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.498641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.498897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.498919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.198 [2024-05-15 10:07:55.502555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.198 [2024-05-15 10:07:55.502808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.198 [2024-05-15 10:07:55.502829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.506428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.506680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.506700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.510413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.510660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.510686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.514816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.515084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.515134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.519312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.519579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.519602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.523231] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.523490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.523513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.527166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.527439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.527460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.530990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.531286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.531308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.534832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.535086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.535119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.538948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.539242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.539264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.542911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.543210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.543233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.546761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.547017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.547037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
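Every injected corruption above follows the same three-line pattern: the TCP transport reports the failed digest check (data_crc32_calc_done), the initiator prints the affected WRITE, and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable rather than fatal. A quick, illustrative way to tally that pattern from a saved copy of this console output (the file name below is assumed):

log=nvmf_digest_error.console.log   # hypothetical saved copy of this output

# Digest failures seen by the transport, and transient-transport-error
# completions reported back to the initiator.
grep -Fc '*ERROR*: Data digest error' "$log"
grep -Fc 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log"

# LBAs from the WRITE command prints, in the order they were reported.
grep -o 'lba:[0-9]*' "$log" | cut -d: -f2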
00:27:18.199 [2024-05-15 10:07:55.550794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.551042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.551065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.554671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.554929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.554950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.558593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.558856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.558877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.562431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.562693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.562714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.565967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.566200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.566224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.569669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.569871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.569895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.573393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.573573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.573595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.199 [2024-05-15 10:07:55.577206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.199 [2024-05-15 10:07:55.577396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.199 [2024-05-15 10:07:55.577418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.460 [2024-05-15 10:07:55.581177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.460 [2024-05-15 10:07:55.581371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.460 [2024-05-15 10:07:55.581393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.460 [2024-05-15 10:07:55.585033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.460 [2024-05-15 10:07:55.585226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.460 [2024-05-15 10:07:55.585247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.460 [2024-05-15 10:07:55.588714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.588905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.588925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.592431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.592654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.592676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.596283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.596562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.596596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.600271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.600461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.600490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.603818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.603955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.603980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.607374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.607478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.607503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.610765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.610840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.610863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.614221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.614304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.614326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.617875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.617970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.617993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.621388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.621500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.621520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.624923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.625064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.625085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.628659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.628761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.628785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.632449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.632605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.632627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.636120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.636273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.636294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.639925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.640012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.640035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.643575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.643670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.643700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.647247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.647331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.647355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.650967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.651053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 
[2024-05-15 10:07:55.651075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.654852] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.654940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.654963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.658552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.658653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.658682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.662280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.662380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.662401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.665825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.665940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.665960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.669371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.669449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.669469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.672966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.673069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.673091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.676548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.676611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.676632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.461 [2024-05-15 10:07:55.680527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.461 [2024-05-15 10:07:55.680612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.461 [2024-05-15 10:07:55.680635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.684783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.684925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.684948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.688630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.688762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.688785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.692389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.692458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.692480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.696159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.696267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.696290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.699710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.699823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.699845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.703587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.703740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.703764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.707440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.707518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.707543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.711208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.711321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.711344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.714915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.715000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.715024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.718788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.718897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.718921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.722696] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.722787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.722811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.726550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.726635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.726659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.730432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.730520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.730543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.734360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.734459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.734487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.738431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.738527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.738551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.742589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.742691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.742715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.746786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.746866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.746890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.750968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.751105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.751128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.754860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.754974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.754996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.758761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 
[2024-05-15 10:07:55.758926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.758948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.762609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.762721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.762745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.766481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.766584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.766605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.770217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.770348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.770370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.773923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.774093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.774115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.777678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.777792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.777813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.781328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.781458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.781479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.785108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.785256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.462 [2024-05-15 10:07:55.785278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.462 [2024-05-15 10:07:55.788835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.462 [2024-05-15 10:07:55.788959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.788981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.792729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.792869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.792892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.796368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.796473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.796496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.800147] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.800269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.800292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.803726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.803851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.803877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.807318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.807401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.807426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.811100] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.811191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.811214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.814766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.814884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.814906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.818431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.818526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.818549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.822149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.822235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.822258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.825921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.826008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.826029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.829525] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.829679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.829700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.833281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.833383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.833403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:18.463 [2024-05-15 10:07:55.836802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.836879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.836899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.463 [2024-05-15 10:07:55.840617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.463 [2024-05-15 10:07:55.840717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.463 [2024-05-15 10:07:55.840739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.844624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.844755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.844777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.848385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.848446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.848467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.852351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.852421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.852443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.856306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.856403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.856435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.860058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.860141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.860163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.863546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.863653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.863674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.867187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.867367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.867395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.870796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.870875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.870895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.874413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.874495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.874517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.878087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.878228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.878251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.881749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.881831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.881853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.885483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.885555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.885577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.889205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.889354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.889376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.892873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.893005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.893027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.896558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.896642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.896664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.900442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.900591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.900618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.904460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.904640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.904678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.908399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.908481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.908503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.912219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.912362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.912385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.916278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.916379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.916401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.920237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.920325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.920346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.923953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.924042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.924065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.927549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.927618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.927641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.930994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.931132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.931160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.934760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.934880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.726 [2024-05-15 10:07:55.934903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.726 [2024-05-15 10:07:55.938732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.726 [2024-05-15 10:07:55.938810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 
10:07:55.938833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.943021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.943117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.943148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.947031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.947208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.947231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.950732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.950823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.950845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.954676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.954762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.954784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.958489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.958597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.958620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.962249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.962377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.962401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.966049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.966154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:18.727 [2024-05-15 10:07:55.966180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.969939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.970044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.970070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.973763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.973855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.973877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.977551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.977627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.977648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.981182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.981249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.981269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.984934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.985036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.985065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.988683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.988789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.988811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.992747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.992837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.992860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:55.996731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:55.996818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:55.996840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.000634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.000723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.000743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.004612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.004759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.004781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.008691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.008769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.008789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.012425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.012505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.012525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.016062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.016167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.016190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.019615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.019695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.019717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.023097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.023273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.023296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.026670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.026776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.026797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.030265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.030366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.030385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.033810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.033871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.033892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.037377] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.037537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.037557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.041282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.041367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.041387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.727 [2024-05-15 10:07:56.045302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.727 [2024-05-15 10:07:56.045384] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.727 [2024-05-15 10:07:56.045407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.049073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.049203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.049225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.052876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.052987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.053010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.056718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.056838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.056861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.060540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.060686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.060713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.064381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.064457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.064479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.068207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.068317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.068338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.071953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.072083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.072118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.075790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.075900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.075923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.079590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.079656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.079679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.083394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.083488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.083512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.087235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.087315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.087338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.090997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.091066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.091087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.094647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.094746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.094767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.098387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 
10:07:56.098494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.098519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.102151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.102269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.102292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.728 [2024-05-15 10:07:56.105967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.728 [2024-05-15 10:07:56.106101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.728 [2024-05-15 10:07:56.106122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.109810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.109933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.109953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.113440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.113519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.113538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.117215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.117322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.117343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.120925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.121052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.121073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.124600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with 
pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.124730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.124751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.128124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.128241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.128262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.131832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.131964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.131985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.135616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.135784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.135809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.139304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.139430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.139453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.142961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.143085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.143107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.146732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.146848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.146869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.150389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.150455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.150477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.154080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.988 [2024-05-15 10:07:56.154172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.988 [2024-05-15 10:07:56.154192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.988 [2024-05-15 10:07:56.157902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.157993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.158017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.161897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.161989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.162011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.165906] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.165983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.166006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.169798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.169900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.169925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.173870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.173993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.174018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.177736] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.177878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.177902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.181589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.181753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.181787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.185434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.185532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.185555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.189302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.189404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.189429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.193021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.193123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.193161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.196937] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.197012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.197035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.201244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.201336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.201360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.205275] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.205360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.205383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.209148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.209245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.209269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.212984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.213054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.213077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.216768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.216920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.216943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.220370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.220547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.220578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.224047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.224128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.224150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.227713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.227781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.227803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
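The repeated pairs above show the digest error injection path exercised by this test: tcp.c:data_crc32_calc_done reports a data digest error on the qpair, and the matching WRITE (same qid/cid) is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The value being checked is the NVMe/TCP data digest, a CRC32C computed over the PDU DATA field and compared against the DDGST carried in the PDU trailer. The fragment below is a minimal, self-contained sketch of that comparison only, not SPDK's actual code path; the struct layout, function names, and the omission of wire byte-order handling are illustrative assumptions.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    /* Reflected CRC32C (Castagnoli, polynomial 0x82F63B78), bitwise for clarity.
     * Initial value and final XOR are both 0xFFFFFFFF, i.e. the standard CRC32C
     * used for NVMe/TCP header and data digests. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Hypothetical view of a received PDU: DATA field plus the DDGST trailer
     * (byte-order conversion of the trailer is omitted in this sketch). */
    struct pdu_view {
        const uint8_t *data;
        size_t         data_len;
        uint32_t       recv_ddgst;
    };

    /* Returns true when the recomputed data digest matches the received one.
     * A false result is the condition the log above reports as
     * "Data digest error on tqpair=...". */
    static bool data_digest_ok(const struct pdu_view *pdu)
    {
        return crc32c(pdu->data, pdu->data_len) == pdu->recv_ddgst;
    }

On a mismatch, only the affected command is failed back with a transient transport status rather than tearing down the connection, which is what the per-command (00/22) completions interleaved with the digest errors above reflect.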
00:27:18.989 [2024-05-15 10:07:56.231472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.231541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.231563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.235088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.235201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.235224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.238675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.238753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.238775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.242354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.242438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.242459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.245859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.245982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.246002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.249370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.249429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.249449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.253076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.253207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.253229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.256989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.257078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.257101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.260900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.989 [2024-05-15 10:07:56.261012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.989 [2024-05-15 10:07:56.261033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.989 [2024-05-15 10:07:56.264958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.265050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.265074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.268973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.269048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.269071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.272670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.272758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.272780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.276343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.276430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.276451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.279927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.280053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.280100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.283626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.283732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.283755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.287016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.287219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.287264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.290496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.290574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.290593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.294070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.294218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.294238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.297602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.297720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.297739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.301095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.301284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.301314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.304771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.304897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.304919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.308325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.308407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.308427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.311728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.311805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.311827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.315347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.315414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.315435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.319212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.319282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.319306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.322944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.323028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.323049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.326639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.326716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.326737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.330467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.330612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 
10:07:56.330635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.334342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.334465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.334488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.338127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.338211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.338234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.341951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.342058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.342081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.345884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.345966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.345989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.349739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.349823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.349845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.353659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.353748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.353771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.357561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.357664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:18.990 [2024-05-15 10:07:56.357687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.361383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.361454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.361476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.365152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.990 [2024-05-15 10:07:56.365250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.990 [2024-05-15 10:07:56.365273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.990 [2024-05-15 10:07:56.369052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:18.991 [2024-05-15 10:07:56.369141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.991 [2024-05-15 10:07:56.369164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.373112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.373184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.373209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.376964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.377049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.377088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.380840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.380938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.380960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.384464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.384541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.384562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.388069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.388192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.388214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.391599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.391669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.391690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.395137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.395218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.395239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.398675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.398808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.398829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.402418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.402553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.402591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.405994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.406119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.406140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.409665] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.409749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.409770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.413290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.413354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.413375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.416918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.416987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.417007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.420609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.420676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.420696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.424316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.424384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.424406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.428315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.428376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.428397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.431869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.251 [2024-05-15 10:07:56.431943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.251 [2024-05-15 10:07:56.431963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.251 [2024-05-15 10:07:56.435611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.435701] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.435723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.439437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.439524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.439547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.443218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.443292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.443326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.446782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.446854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.446874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.450319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.450440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.450461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.453931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.454027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.454047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.457886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.458000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.458020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.461824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.461910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.461929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.465516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.465633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.465654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.469015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.469095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.469125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.472681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.472776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.472809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.476351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.476423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.476445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.479971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.480123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.480146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.483509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.483577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.483600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.487036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 
10:07:56.487125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.487157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.490860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.490946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.490966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.494420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.494549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.494570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.497907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.498003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.498024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.501604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.501677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.501697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.505298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.505422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.505443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.508925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.509038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.509060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.512697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with 
pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.512781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.512804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.516587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.516674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.516696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.520677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.520777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.520800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.524868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.525017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.525042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.528796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.528898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.528922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.532487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.532563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.532587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.536247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.252 [2024-05-15 10:07:56.536326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.252 [2024-05-15 10:07:56.536348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.252 [2024-05-15 10:07:56.539971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.540044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.540068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.543687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.543763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.543785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.547123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.547248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.547268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.550651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.550725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.550744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.554315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.554400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.554421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.557929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.558003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.558022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.561547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.561615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.561635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.565001] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.565174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.565206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.568806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.568924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.568961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.572308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.572436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.572473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.575775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.575885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.575906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.579435] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.579503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.579526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.582933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.582991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.583010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.586518] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.586578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.586597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.590258] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.590327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.590348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.594218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.594305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.594325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.598011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.598118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.598140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.601624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.601766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.601809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.605391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.605478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.605500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.608973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.609084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.609105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.612653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.612730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.612750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.253 
[2024-05-15 10:07:56.616334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.616442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.616463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.620181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.620262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.620296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.623681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.623777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.623799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.627210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.253 [2024-05-15 10:07:56.627293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.253 [2024-05-15 10:07:56.627314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.253 [2024-05-15 10:07:56.630872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.254 [2024-05-15 10:07:56.630947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.254 [2024-05-15 10:07:56.630967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.634861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.634942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.634963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.638457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.638596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.638634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.642239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.642320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.642342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.645965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.646046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.646065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.649550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.649658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.649678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.653091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.653181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.653201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.656783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.656881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.656904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.660599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.660682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.660704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.664281] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.664426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.664448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.667748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.667854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.667875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.671677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.671763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.671787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.675548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.675662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.675684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.679065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.679146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.679165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.682854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.682927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.682947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.686689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.686844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.686866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.690487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.690596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.690616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.694375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.694454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.694475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.698154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.698265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.698285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.701925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.702061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.702081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.705527] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.705632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.705653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.709211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.709292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.709312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.712853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.712934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.712954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.716727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.716812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.716845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.720764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.720844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.720866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.724545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.724665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.724686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.728153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.728225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.516 [2024-05-15 10:07:56.728247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.516 [2024-05-15 10:07:56.731776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.516 [2024-05-15 10:07:56.731867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.731889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.735511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.735662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.735695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.738985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.739045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.739064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.742561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.742677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 
10:07:56.742697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.746082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.746249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.746273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.749759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.749824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.749846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.753565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.753635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.753658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.757321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.757440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.757460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.761090] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.761191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.761213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.764947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.765016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.765039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.768681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.768757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:19.517 [2024-05-15 10:07:56.768779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.772349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.772475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.772498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.776185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.776263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.776284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.780030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.780160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.780188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.783666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.783808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.783845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.787333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.787424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.787445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.790932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.791001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.791022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.794493] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.794551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.794571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.797946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.798027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.798047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.801472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.801532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.801552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.805031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.805163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.805183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.808477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.808575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.808596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.812070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.812191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.812211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.815570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.815691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.815711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.818965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.819099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.819130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.822589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.822710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.822730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.826218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.826314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.826334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.830003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.830083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.517 [2024-05-15 10:07:56.830104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.517 [2024-05-15 10:07:56.833796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.517 [2024-05-15 10:07:56.833922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.833945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.837568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.837642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.837661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.841091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.841218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.841238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.844701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.844782] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.844801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.848313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.848464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.848494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.851788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.851909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.851930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.855190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.855320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.855340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.858791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.858867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.858886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.862325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.862437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.862456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.865714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.865794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.865814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.869158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.869251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.869270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.872843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.872946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.872967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.876411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.876473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.876494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.879870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.879946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.879968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.883312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.883375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.883395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.886950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.887021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.887042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.890513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 10:07:56.890615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.890634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.518 [2024-05-15 10:07:56.894275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.518 [2024-05-15 
10:07:56.894372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.518 [2024-05-15 10:07:56.894392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.898266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.898341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.898365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.902934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.903026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.903048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.907973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.908081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.908116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.913366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.913512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.913537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.917911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.918050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.918076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.921861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.921937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.921961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.925885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with 
pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.926000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.926024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.930608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.930709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.930738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.935873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.935974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.936012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.941183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.941305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.941331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.945138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.945242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.945266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.950344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.800 [2024-05-15 10:07:56.950466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.800 [2024-05-15 10:07:56.950491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.800 [2024-05-15 10:07:56.955602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.955702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.955727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:56.959496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.959591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.959616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:56.963436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.963546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.963569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:56.967314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.967390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.967414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:56.971380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.971494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.971517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:56.975238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.975375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.975399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:56.979217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.979291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.979314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:56.983072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.983201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.983224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:56.986886] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.986982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.987005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:56.991367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.991499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.991522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:56.995287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.995361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.995384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:56.998835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:56.998945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:56.998966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:57.002604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.002709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.002730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:57.006203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.006325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.006345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:57.009751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.009884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.009904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:57.013493] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.013554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.013574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:57.017086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.017226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.017246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:57.020694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.020795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.020817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:57.024488] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.024659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.024680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:57.028333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.028421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.028442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:57.031953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.032029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.032051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:57.035533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.035620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.035642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.801 
[2024-05-15 10:07:57.039019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.039149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.039185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.801 [2024-05-15 10:07:57.042537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.801 [2024-05-15 10:07:57.042615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.801 [2024-05-15 10:07:57.042634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.046134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.046230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.046252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.049657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.049739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.049758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.053164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.053269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.053291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.056770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.056893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.056914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.060318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.060387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.060407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.063809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.063931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.063953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.067348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.067411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.067433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.070885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.070975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.070996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.074541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.074622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.074643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.078317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.078381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.078401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.081733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.081803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.081822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.085342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.085466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.085489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.089322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.089427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.089448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.093338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.093451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.093472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.096946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.097042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.097064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.100596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.100679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.100701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.104230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.104316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.104336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.107974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.108145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.108167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.111772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.111875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.111897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.115519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.115635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.115658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.119255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.119390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.119413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.123008] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.123078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.123100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.126722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.126859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.126879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.802 [2024-05-15 10:07:57.130506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.802 [2024-05-15 10:07:57.130615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.802 [2024-05-15 10:07:57.130636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.134236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.134337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.134358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.138051] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.138203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.138225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.141767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.141832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.141854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.145411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.145529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.145550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.149062] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.149195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.149216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.152892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.152983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.153005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.156617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.156701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.156723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.160335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.160436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.160457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.164763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.164876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 
10:07:57.164897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.169945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.170015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.170038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.173641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.173750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.173770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.177247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.177331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.177352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:19.803 [2024-05-15 10:07:57.180756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:19.803 [2024-05-15 10:07:57.180876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:19.803 [2024-05-15 10:07:57.180895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.062 [2024-05-15 10:07:57.184576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:20.062 [2024-05-15 10:07:57.184730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.062 [2024-05-15 10:07:57.184754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.062 [2024-05-15 10:07:57.188490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:20.062 [2024-05-15 10:07:57.188590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.062 [2024-05-15 10:07:57.188613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.062 [2024-05-15 10:07:57.192416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:20.062 [2024-05-15 10:07:57.192555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:20.062 [2024-05-15 10:07:57.192582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.062 [2024-05-15 10:07:57.196218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:20.062 [2024-05-15 10:07:57.196339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.062 [2024-05-15 10:07:57.196364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.062 [2024-05-15 10:07:57.199774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:20.062 [2024-05-15 10:07:57.199881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.062 [2024-05-15 10:07:57.199906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.062 [2024-05-15 10:07:57.203515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:20.062 [2024-05-15 10:07:57.203619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.062 [2024-05-15 10:07:57.203645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.062 [2024-05-15 10:07:57.207222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:20.062 [2024-05-15 10:07:57.207291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.062 [2024-05-15 10:07:57.207318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.062 [2024-05-15 10:07:57.210832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:20.062 [2024-05-15 10:07:57.210978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.062 [2024-05-15 10:07:57.211002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:20.062 [2024-05-15 10:07:57.214623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:20.062 [2024-05-15 10:07:57.214692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.062 [2024-05-15 10:07:57.214718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:20.062 [2024-05-15 10:07:57.218351] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:20.062 [2024-05-15 10:07:57.218429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.062 [2024-05-15 10:07:57.218452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:20.062 [2024-05-15 10:07:57.222058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1614430) with pdu=0x2000190fef90 00:27:20.062 [2024-05-15 10:07:57.222158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:20.062 [2024-05-15 10:07:57.222181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:20.062 00:27:20.062 Latency(us) 00:27:20.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.062 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:20.062 nvme0n1 : 2.00 8074.23 1009.28 0.00 0.00 1977.27 1458.96 11421.99 00:27:20.062 =================================================================================================================== 00:27:20.062 Total : 8074.23 1009.28 0.00 0.00 1977.27 1458.96 11421.99 00:27:20.062 0 00:27:20.062 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:20.062 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:20.062 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:20.062 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:20.062 | .driver_specific 00:27:20.062 | .nvme_error 00:27:20.062 | .status_code 00:27:20.062 | .command_transient_transport_error' 00:27:20.320 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 521 > 0 )) 00:27:20.320 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93593 00:27:20.320 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 93593 ']' 00:27:20.320 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 93593 00:27:20.320 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:27:20.320 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:20.320 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 93593 00:27:20.320 killing process with pid 93593 00:27:20.320 Received shutdown signal, test time was about 2.000000 seconds 00:27:20.320 00:27:20.320 Latency(us) 00:27:20.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.320 =================================================================================================================== 00:27:20.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:20.320 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:20.320 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:20.320 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 93593' 00:27:20.320 10:07:57 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 93593 00:27:20.320 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 93593 00:27:20.887 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93277 00:27:20.887 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 93277 ']' 00:27:20.887 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 93277 00:27:20.887 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:27:20.887 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:20.887 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 93277 00:27:20.887 killing process with pid 93277 00:27:20.887 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:20.887 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:20.887 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 93277' 00:27:20.887 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 93277 00:27:20.887 [2024-05-15 10:07:57.995685] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:20.887 10:07:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 93277 00:27:21.145 00:27:21.145 real 0m19.794s 00:27:21.145 user 0m37.035s 00:27:21.145 sys 0m5.748s 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:21.145 ************************************ 00:27:21.145 END TEST nvmf_digest_error 00:27:21.145 ************************************ 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:21.145 rmmod nvme_tcp 00:27:21.145 rmmod nvme_fabrics 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93277 ']' 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 93277 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@947 -- # '[' -z 93277 ']' 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- 
common/autotest_common.sh@951 -- # kill -0 93277 00:27:21.145 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (93277) - No such process 00:27:21.145 Process with pid 93277 is not found 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@974 -- # echo 'Process with pid 93277 is not found' 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.145 10:07:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.405 10:07:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:21.405 00:27:21.405 real 0m41.315s 00:27:21.405 user 1m16.220s 00:27:21.405 sys 0m12.107s 00:27:21.405 10:07:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:21.405 10:07:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:21.405 ************************************ 00:27:21.405 END TEST nvmf_digest 00:27:21.405 ************************************ 00:27:21.405 10:07:58 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:27:21.405 10:07:58 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:27:21.405 10:07:58 nvmf_tcp -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:21.405 10:07:58 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:21.405 10:07:58 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:21.405 10:07:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:21.405 ************************************ 00:27:21.405 START TEST nvmf_mdns_discovery 00:27:21.405 ************************************ 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:21.405 * Looking for test storage... 
00:27:21.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:27:21.405 
10:07:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:21.405 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:21.406 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:21.406 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.406 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:21.406 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:21.406 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:21.406 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:21.406 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:21.665 Cannot find device "nvmf_tgt_br" 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:21.665 Cannot find device "nvmf_tgt_br2" 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:21.665 Cannot find device "nvmf_tgt_br" 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:21.665 Cannot find device "nvmf_tgt_br2" 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:21.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:21.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:21.665 10:07:58 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:21.665 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:21.665 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:21.665 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:21.665 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:21.925 10:07:59 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:21.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:27:21.925 00:27:21.925 --- 10.0.0.2 ping statistics --- 00:27:21.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.925 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:21.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:21.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:27:21.925 00:27:21.925 --- 10.0.0.3 ping statistics --- 00:27:21.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.925 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:21.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:21.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:27:21.925 00:27:21.925 --- 10.0.0.1 ping statistics --- 00:27:21.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.925 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=93893 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 93893 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@828 -- # '[' -z 93893 ']' 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:21.925 10:07:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.925 [2024-05-15 10:07:59.275063] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:27:21.925 [2024-05-15 10:07:59.275210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.183 [2024-05-15 10:07:59.425873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.483 [2024-05-15 10:07:59.600300] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.483 [2024-05-15 10:07:59.600604] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.483 [2024-05-15 10:07:59.600747] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.483 [2024-05-15 10:07:59.600820] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.483 [2024-05-15 10:07:59.600863] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:22.483 [2024-05-15 10:07:59.600937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.052 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:23.052 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@861 -- # return 0 00:27:23.052 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:23.052 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:23.052 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.052 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.052 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:27:23.052 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.052 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.053 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.053 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:27:23.053 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.053 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.311 [2024-05-15 10:08:00.564442] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.311 [2024-05-15 10:08:00.576363] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:23.311 [2024-05-15 10:08:00.576812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.311 null0 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd 
bdev_null_create null1 1000 512 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.311 null1 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.311 null2 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.311 null3 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=93943 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 93943 /tmp/host.sock 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@828 -- # '[' -z 93943 ']' 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:23.311 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:23.311 10:08:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.569 [2024-05-15 10:08:00.704432] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:27:23.569 [2024-05-15 10:08:00.704832] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93943 ] 00:27:23.569 [2024-05-15 10:08:00.853165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.827 [2024-05-15 10:08:01.026902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.392 10:08:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:24.392 10:08:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@861 -- # return 0 00:27:24.392 10:08:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:27:24.392 10:08:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:27:24.392 10:08:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:27:24.650 10:08:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=93972 00:27:24.650 10:08:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:27:24.650 10:08:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:27:24.650 10:08:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:27:24.650 Process 914 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:27:24.650 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:27:24.650 Successfully dropped root privileges. 00:27:25.598 avahi-daemon 0.8 starting up. 00:27:25.598 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:27:25.598 Successfully called chroot(). 00:27:25.598 Successfully dropped remaining capabilities. 00:27:25.598 No service file found in /etc/avahi/services. 00:27:25.598 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:25.598 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:27:25.598 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:25.598 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:27:25.598 Network interface enumeration completed. 00:27:25.598 Registering new address record for fe80::d0d6:3fff:feb4:1d28 on nvmf_tgt_if2.*. 00:27:25.598 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:27:25.598 Registering new address record for fe80::b88a:dbff:fe0f:2460 on nvmf_tgt_if.*. 00:27:25.598 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:27:25.598 Server startup complete. Host name is fedora38-cloud-1701806725-069-updated-1701632595.local. Local service cookie is 3511667305. 
00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:25.598 10:08:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
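On the host socket the test enables bdev_nvme debug logging and starts mDNS-based discovery for the _nvme-disc._tcp service type under host NQN nqn.2021-12.io.spdk:test; the get_subsystem_names and get_bdev_list helpers then reduce to jq pipelines over bdev_nvme_get_controllers and bdev_get_bdevs, both of which are still empty at this point. A sketch of those host-side calls, with the rpc.py path assumed and every argument taken from the trace:

    HOST_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"   # path assumed

    $HOST_RPC log_set_flag bdev_nvme
    $HOST_RPC bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

    # get_subsystem_names / get_bdev_list from the trace, as one-liners:
    $HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    $HOST_RPC bdev_get_bdevs            | jq -r '.[].name' | sort | xargs

Both pipelines print nothing until the discovery poller attaches the mdns0_nvme0 / mdns1_nvme0 controllers later in the trace.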
00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.856 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.857 [2024-05-15 10:08:03.176713] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.857 10:08:03 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:25.857 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.116 [2024-05-15 10:08:03.245299] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.116 [2024-05-15 10:08:03.297261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.116 
10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.116 [2024-05-15 10:08:03.309246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:26.116 10:08:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:27:27.051 [2024-05-15 10:08:04.076726] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:27.310 [2024-05-15 10:08:04.676794] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:27.310 [2024-05-15 10:08:04.676855] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1701806725-069-updated-1701632595.local:8009 (10.0.0.3) 00:27:27.310 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:27.310 cookie is 0 00:27:27.310 is_local: 1 00:27:27.310 our_own: 0 00:27:27.310 wide_area: 0 00:27:27.310 multicast: 1 00:27:27.310 cached: 1 00:27:27.568 [2024-05-15 10:08:04.776766] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:27.568 [2024-05-15 10:08:04.776822] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1701806725-069-updated-1701632595.local:8009 (10.0.0.3) 00:27:27.568 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:27.568 cookie is 0 00:27:27.568 is_local: 1 00:27:27.568 our_own: 0 00:27:27.568 wide_area: 0 00:27:27.568 multicast: 1 00:27:27.568 cached: 1 00:27:27.568 [2024-05-15 10:08:04.776856] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:27:27.568 [2024-05-15 10:08:04.876767] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:27.568 [2024-05-15 10:08:04.876827] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1701806725-069-updated-1701632595.local:8009 (10.0.0.2) 00:27:27.568 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:27.568 cookie is 0 00:27:27.568 is_local: 1 00:27:27.568 our_own: 0 00:27:27.568 wide_area: 0 00:27:27.568 multicast: 1 00:27:27.568 cached: 1 00:27:27.826 [2024-05-15 10:08:04.976782] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:27.826 [2024-05-15 10:08:04.976845] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1701806725-069-updated-1701632595.local:8009 (10.0.0.2) 00:27:27.826 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:27.826 cookie is 0 00:27:27.826 is_local: 1 00:27:27.826 our_own: 0 00:27:27.826 wide_area: 0 00:27:27.826 multicast: 1 00:27:27.826 cached: 1 00:27:27.826 [2024-05-15 10:08:04.976880] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:27:28.423 [2024-05-15 10:08:05.689042] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:28.423 [2024-05-15 10:08:05.689139] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:28.423 [2024-05-15 10:08:05.689179] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:28.423 [2024-05-15 10:08:05.777205] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:27:28.681 [2024-05-15 10:08:05.841049] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:28.681 [2024-05-15 10:08:05.841102] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:28.681 [2024-05-15 10:08:05.888597] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:28.681 [2024-05-15 10:08:05.888636] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:28.681 [2024-05-15 10:08:05.888650] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:28.681 [2024-05-15 10:08:05.974694] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:27:28.681 [2024-05-15 10:08:06.029983] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:28.681 [2024-05-15 10:08:06.030020] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_mdns_discovery_info 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == 
\m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:31.214 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:31.473 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.474 10:08:08 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:27:32.410 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:27:32.410 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:32.410 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:32.410 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.410 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:32.410 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.410 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.410 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.669 [2024-05-15 10:08:09.868232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:32.669 [2024-05-15 10:08:09.869122] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:32.669 [2024-05-15 10:08:09.869163] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:32.669 [2024-05-15 10:08:09.869200] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:32.669 [2024-05-15 10:08:09.869212] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.669 [2024-05-15 10:08:09.880166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:32.669 [2024-05-15 10:08:09.881102] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:32.669 [2024-05-15 10:08:09.881155] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:32.669 10:08:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:27:32.669 [2024-05-15 10:08:10.014325] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:27:32.669 [2024-05-15 10:08:10.014595] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:27:32.928 [2024-05-15 10:08:10.071681] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:32.928 [2024-05-15 10:08:10.071736] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:32.928 [2024-05-15 10:08:10.071745] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM 
nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:32.928 [2024-05-15 10:08:10.071768] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:32.928 [2024-05-15 10:08:10.071811] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:32.928 [2024-05-15 10:08:10.071820] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:32.928 [2024-05-15 10:08:10.071827] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:32.928 [2024-05-15 10:08:10.071841] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:32.928 [2024-05-15 10:08:10.117423] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:32.928 [2024-05-15 10:08:10.117467] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:32.928 [2024-05-15 10:08:10.117522] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:32.928 [2024-05-15 10:08:10.117530] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:33.520 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:27:33.520 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:33.520 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:33.520 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:33.520 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.520 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:33.520 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:33.779 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.779 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:33.779 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:27:33.779 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.779 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.779 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:33.779 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:33.779 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:33.779 10:08:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:33.779 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:33.780 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.040 [2024-05-15 10:08:11.253946] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:34.040 [2024-05-15 10:08:11.253997] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:34.040 [2024-05-15 10:08:11.254034] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:34.040 [2024-05-15 10:08:11.254047] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:34.040 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:34.040 [2024-05-15 10:08:11.261236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.040 [2024-05-15 10:08:11.261279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.040 [2024-05-15 10:08:11.261294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.040 [2024-05-15 10:08:11.261305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.040 [2024-05-15 10:08:11.261316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.040 [2024-05-15 10:08:11.261327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.040 [2024-05-15 10:08:11.261338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.040 [2024-05-15 10:08:11.261348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.040 [2024-05-15 10:08:11.261359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to 
be set 00:27:34.040 [2024-05-15 10:08:11.265978] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:34.040 [2024-05-15 10:08:11.266031] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:34.040 [2024-05-15 10:08:11.268588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.040 [2024-05-15 10:08:11.268626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.040 [2024-05-15 10:08:11.268639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.040 [2024-05-15 10:08:11.268649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.040 [2024-05-15 10:08:11.268660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.040 [2024-05-15 10:08:11.268670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.040 [2024-05-15 10:08:11.268681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:34.040 [2024-05-15 10:08:11.268691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:34.041 [2024-05-15 10:08:11.268701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.041 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:34.041 10:08:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:27:34.041 [2024-05-15 10:08:11.271170] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.041 [2024-05-15 10:08:11.278551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.041 [2024-05-15 10:08:11.281196] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.041 [2024-05-15 10:08:11.281357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.281415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.281431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.041 [2024-05-15 10:08:11.281445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.041 [2024-05-15 10:08:11.281462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.041 [2024-05-15 10:08:11.281479] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.041 [2024-05-15 10:08:11.281490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.041 [2024-05-15 10:08:11.281503] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.041 [2024-05-15 10:08:11.281520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.041 [2024-05-15 10:08:11.288562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:34.041 [2024-05-15 10:08:11.288655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.288693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.288706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bca0 with addr=10.0.0.3, port=4420 00:27:34.041 [2024-05-15 10:08:11.288718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.041 [2024-05-15 10:08:11.288733] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.041 [2024-05-15 10:08:11.288749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:34.041 [2024-05-15 10:08:11.288775] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:34.041 [2024-05-15 10:08:11.288786] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:34.041 [2024-05-15 10:08:11.288801] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.041 [2024-05-15 10:08:11.291266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.041 [2024-05-15 10:08:11.291344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.291386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.291400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.041 [2024-05-15 10:08:11.291410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.041 [2024-05-15 10:08:11.291435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.041 [2024-05-15 10:08:11.291450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.041 [2024-05-15 10:08:11.291459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.041 [2024-05-15 10:08:11.291469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.041 [2024-05-15 10:08:11.291482] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
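For reference while reading this reconnect loop: the 10.0.0.2:4420 and 10.0.0.3:4420 paths being retried belong to the two subsystems provisioned on the target earlier in the trace. That provisioning, collected into one sketch with rpc.py against the target's default RPC socket (rpc.py path assumed, arguments copied from the trace; the extra namespaces null1/null3 and the 4421 listeners are added a few steps later in the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path assumed

    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

    # Publish the target's discovery info over mDNS so the _nvme-disc._tcp
    # services (spdk0/spdk1 in the trace) get registered with avahi and
    # resolved by the host's mdns discovery.
    $RPC nvmf_publish_mdns_prr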
00:27:34.041 [2024-05-15 10:08:11.298625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:34.041 [2024-05-15 10:08:11.298726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.298769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.298784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bca0 with addr=10.0.0.3, port=4420 00:27:34.041 [2024-05-15 10:08:11.298798] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.041 [2024-05-15 10:08:11.298813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.041 [2024-05-15 10:08:11.298829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:34.041 [2024-05-15 10:08:11.298839] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:34.041 [2024-05-15 10:08:11.298850] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:34.041 [2024-05-15 10:08:11.298864] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.041 [2024-05-15 10:08:11.301315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.041 [2024-05-15 10:08:11.301401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.301440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.301454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.041 [2024-05-15 10:08:11.301466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.041 [2024-05-15 10:08:11.301483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.041 [2024-05-15 10:08:11.301498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.041 [2024-05-15 10:08:11.301507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.041 [2024-05-15 10:08:11.301518] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.041 [2024-05-15 10:08:11.301532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.041 [2024-05-15 10:08:11.308708] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:34.041 [2024-05-15 10:08:11.308791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.308829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.308842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bca0 with addr=10.0.0.3, port=4420 00:27:34.041 [2024-05-15 10:08:11.308853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.041 [2024-05-15 10:08:11.308868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.041 [2024-05-15 10:08:11.308883] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:34.041 [2024-05-15 10:08:11.308893] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:34.041 [2024-05-15 10:08:11.308904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:34.041 [2024-05-15 10:08:11.308918] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.041 [2024-05-15 10:08:11.311364] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.041 [2024-05-15 10:08:11.311445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.311486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.311500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.041 [2024-05-15 10:08:11.311511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.041 [2024-05-15 10:08:11.311527] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.041 [2024-05-15 10:08:11.311541] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.041 [2024-05-15 10:08:11.311551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.041 [2024-05-15 10:08:11.311562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.041 [2024-05-15 10:08:11.311576] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.041 [2024-05-15 10:08:11.318767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:34.041 [2024-05-15 10:08:11.318880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.041 [2024-05-15 10:08:11.318921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.318936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bca0 with addr=10.0.0.3, port=4420 00:27:34.042 [2024-05-15 10:08:11.318949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.042 [2024-05-15 10:08:11.318965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.042 [2024-05-15 10:08:11.318997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:34.042 [2024-05-15 10:08:11.319008] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:34.042 [2024-05-15 10:08:11.319019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:34.042 [2024-05-15 10:08:11.319033] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.042 [2024-05-15 10:08:11.321419] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.042 [2024-05-15 10:08:11.321505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.321546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.321560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.042 [2024-05-15 10:08:11.321571] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.042 [2024-05-15 10:08:11.321586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.042 [2024-05-15 10:08:11.321601] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.042 [2024-05-15 10:08:11.321611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.042 [2024-05-15 10:08:11.321638] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.042 [2024-05-15 10:08:11.321652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.042 [2024-05-15 10:08:11.328831] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:34.042 [2024-05-15 10:08:11.328938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.328979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.328992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bca0 with addr=10.0.0.3, port=4420 00:27:34.042 [2024-05-15 10:08:11.329004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.042 [2024-05-15 10:08:11.329020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.042 [2024-05-15 10:08:11.329071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:34.042 [2024-05-15 10:08:11.329083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:34.042 [2024-05-15 10:08:11.329095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:34.042 [2024-05-15 10:08:11.329119] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.042 [2024-05-15 10:08:11.331472] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.042 [2024-05-15 10:08:11.331544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.331582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.331595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.042 [2024-05-15 10:08:11.331606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.042 [2024-05-15 10:08:11.331621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.042 [2024-05-15 10:08:11.331635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.042 [2024-05-15 10:08:11.331644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.042 [2024-05-15 10:08:11.331655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.042 [2024-05-15 10:08:11.331669] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.042 [2024-05-15 10:08:11.338894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:34.042 [2024-05-15 10:08:11.338971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.339011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.339025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bca0 with addr=10.0.0.3, port=4420 00:27:34.042 [2024-05-15 10:08:11.339035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.042 [2024-05-15 10:08:11.339050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.042 [2024-05-15 10:08:11.339078] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:34.042 [2024-05-15 10:08:11.339097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:34.042 [2024-05-15 10:08:11.339107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:34.042 [2024-05-15 10:08:11.339120] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.042 [2024-05-15 10:08:11.341517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.042 [2024-05-15 10:08:11.341579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.341615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.341628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.042 [2024-05-15 10:08:11.341638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.042 [2024-05-15 10:08:11.341651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.042 [2024-05-15 10:08:11.341664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.042 [2024-05-15 10:08:11.341674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.042 [2024-05-15 10:08:11.341683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.042 [2024-05-15 10:08:11.341695] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.042 [2024-05-15 10:08:11.348946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:34.042 [2024-05-15 10:08:11.349015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.349054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.349067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bca0 with addr=10.0.0.3, port=4420 00:27:34.042 [2024-05-15 10:08:11.349078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.042 [2024-05-15 10:08:11.349101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.042 [2024-05-15 10:08:11.349132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:34.042 [2024-05-15 10:08:11.349142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:34.042 [2024-05-15 10:08:11.349153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:34.042 [2024-05-15 10:08:11.349166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.042 [2024-05-15 10:08:11.351560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.042 [2024-05-15 10:08:11.351623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.351660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.042 [2024-05-15 10:08:11.351674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.042 [2024-05-15 10:08:11.351685] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.042 [2024-05-15 10:08:11.351699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.042 [2024-05-15 10:08:11.351713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.042 [2024-05-15 10:08:11.351722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.042 [2024-05-15 10:08:11.351732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.042 [2024-05-15 10:08:11.351745] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.043 [2024-05-15 10:08:11.358997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:34.043 [2024-05-15 10:08:11.359077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.359168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.359183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bca0 with addr=10.0.0.3, port=4420 00:27:34.043 [2024-05-15 10:08:11.359210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.043 [2024-05-15 10:08:11.359228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.043 [2024-05-15 10:08:11.359263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:34.043 [2024-05-15 10:08:11.359273] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:34.043 [2024-05-15 10:08:11.359283] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:34.043 [2024-05-15 10:08:11.359297] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.043 [2024-05-15 10:08:11.361602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.043 [2024-05-15 10:08:11.361671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.361709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.361722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.043 [2024-05-15 10:08:11.361733] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.043 [2024-05-15 10:08:11.361747] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.043 [2024-05-15 10:08:11.361761] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.043 [2024-05-15 10:08:11.361771] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.043 [2024-05-15 10:08:11.361781] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.043 [2024-05-15 10:08:11.361794] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.043 [2024-05-15 10:08:11.369046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:34.043 [2024-05-15 10:08:11.369132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.369168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.369181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bca0 with addr=10.0.0.3, port=4420 00:27:34.043 [2024-05-15 10:08:11.369192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.043 [2024-05-15 10:08:11.369206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.043 [2024-05-15 10:08:11.369219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:34.043 [2024-05-15 10:08:11.369228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:34.043 [2024-05-15 10:08:11.369238] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:34.043 [2024-05-15 10:08:11.369251] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.043 [2024-05-15 10:08:11.371647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.043 [2024-05-15 10:08:11.371711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.371748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.371761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.043 [2024-05-15 10:08:11.371771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.043 [2024-05-15 10:08:11.371785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.043 [2024-05-15 10:08:11.371799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.043 [2024-05-15 10:08:11.371809] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.043 [2024-05-15 10:08:11.371818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.043 [2024-05-15 10:08:11.371831] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.043 [2024-05-15 10:08:11.379086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:34.043 [2024-05-15 10:08:11.379175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.379212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.379224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bca0 with addr=10.0.0.3, port=4420 00:27:34.043 [2024-05-15 10:08:11.379234] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.043 [2024-05-15 10:08:11.379248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.043 [2024-05-15 10:08:11.379262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:34.043 [2024-05-15 10:08:11.379271] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:34.043 [2024-05-15 10:08:11.379281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:34.043 [2024-05-15 10:08:11.379307] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.043 [2024-05-15 10:08:11.381693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.043 [2024-05-15 10:08:11.381765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.381799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.381811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.043 [2024-05-15 10:08:11.381821] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.043 [2024-05-15 10:08:11.381834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.043 [2024-05-15 10:08:11.381847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.043 [2024-05-15 10:08:11.381856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.043 [2024-05-15 10:08:11.381865] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.043 [2024-05-15 10:08:11.381877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.043 [2024-05-15 10:08:11.389131] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:34.043 [2024-05-15 10:08:11.389221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.389256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.389269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4bca0 with addr=10.0.0.3, port=4420 00:27:34.043 [2024-05-15 10:08:11.389279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4bca0 is same with the state(5) to be set 00:27:34.043 [2024-05-15 10:08:11.389293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4bca0 (9): Bad file descriptor 00:27:34.043 [2024-05-15 10:08:11.389321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:34.043 [2024-05-15 10:08:11.389331] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:34.043 [2024-05-15 10:08:11.389340] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:34.043 [2024-05-15 10:08:11.389353] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:34.043 [2024-05-15 10:08:11.391731] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:34.043 [2024-05-15 10:08:11.391791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.391826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:34.043 [2024-05-15 10:08:11.391838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf6c260 with addr=10.0.0.2, port=4420 00:27:34.043 [2024-05-15 10:08:11.391848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c260 is same with the state(5) to be set 00:27:34.043 [2024-05-15 10:08:11.391861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6c260 (9): Bad file descriptor 00:27:34.043 [2024-05-15 10:08:11.391874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.043 [2024-05-15 10:08:11.391883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:34.043 [2024-05-15 10:08:11.391893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.043 [2024-05-15 10:08:11.391905] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
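Note on the block above: the repeated "connect() failed, errno = 111" entries are the bdev_nvme reconnect poller hitting ECONNREFUSED. At this point in the test the subsystems have been moved off port 4420, so every reconnect attempt to 10.0.0.2:4420 and 10.0.0.3:4420 is refused and both controllers stay in a failed state until the next discovery log page points the host at port 4421 (the "found again" entries below). A minimal sketch for decoding the errno value on the test VM, assuming python3 is available there; this helper is not part of the test scripts:
# Hypothetical one-liner, not from mdns_discovery.sh: map errno 111 to its symbolic name.
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# Expected output on Linux: ECONNREFUSED - Connection refused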
00:27:34.043 [2024-05-15 10:08:11.396342] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:27:34.044 [2024-05-15 10:08:11.396372] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:34.044 [2024-05-15 10:08:11.396413] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:34.044 [2024-05-15 10:08:11.396444] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:34.044 [2024-05-15 10:08:11.396459] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:34.044 [2024-05-15 10:08:11.396471] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:34.302 [2024-05-15 10:08:11.482420] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:34.302 [2024-05-15 10:08:11.482495] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:35.238 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:35.239 10:08:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:27:35.497 [2024-05-15 10:08:12.676852] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:36.432 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@649 -- # local es=0 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:36.433 [2024-05-15 10:08:13.804220] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:27:36.433 2024/05/15 10:08:13 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: 
Code=-17 Msg=File exists 00:27:36.433 request: 00:27:36.433 { 00:27:36.433 "method": "bdev_nvme_start_mdns_discovery", 00:27:36.433 "params": { 00:27:36.433 "name": "mdns", 00:27:36.433 "svcname": "_nvme-disc._http", 00:27:36.433 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:36.433 } 00:27:36.433 } 00:27:36.433 Got JSON-RPC error response 00:27:36.433 GoRPCClient: error on JSON-RPC call 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # es=1 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:36.433 10:08:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:27:37.367 [2024-05-15 10:08:14.392777] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:37.367 [2024-05-15 10:08:14.492765] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:37.367 [2024-05-15 10:08:14.592779] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:37.367 [2024-05-15 10:08:14.592813] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1701806725-069-updated-1701632595.local:8009 (10.0.0.3) 00:27:37.367 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:37.367 cookie is 0 00:27:37.367 is_local: 1 00:27:37.367 our_own: 0 00:27:37.367 wide_area: 0 00:27:37.367 multicast: 1 00:27:37.367 cached: 1 00:27:37.367 [2024-05-15 10:08:14.692786] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:37.367 [2024-05-15 10:08:14.692830] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1701806725-069-updated-1701632595.local:8009 (10.0.0.3) 00:27:37.367 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:37.367 cookie is 0 00:27:37.367 is_local: 1 00:27:37.367 our_own: 0 00:27:37.367 wide_area: 0 00:27:37.367 multicast: 1 00:27:37.367 cached: 1 00:27:37.367 [2024-05-15 10:08:14.692847] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:27:37.625 [2024-05-15 10:08:14.792790] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:27:37.625 [2024-05-15 10:08:14.792853] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1701806725-069-updated-1701632595.local:8009 (10.0.0.2) 00:27:37.625 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:37.625 cookie is 0 00:27:37.625 is_local: 1 00:27:37.625 our_own: 0 00:27:37.625 wide_area: 0 00:27:37.625 multicast: 1 00:27:37.625 cached: 1 00:27:37.625 [2024-05-15 10:08:14.892819] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:27:37.625 [2024-05-15 10:08:14.892891] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1701806725-069-updated-1701632595.local:8009 (10.0.0.2) 00:27:37.625 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:27:37.625 cookie is 0 00:27:37.625 is_local: 1 00:27:37.625 our_own: 0 00:27:37.625 wide_area: 0 00:27:37.625 multicast: 1 00:27:37.625 cached: 1 00:27:37.625 [2024-05-15 10:08:14.892916] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:27:38.559 [2024-05-15 10:08:15.600804] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:38.559 [2024-05-15 10:08:15.600865] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:38.559 [2024-05-15 10:08:15.600882] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:38.559 [2024-05-15 10:08:15.686938] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:27:38.559 [2024-05-15 10:08:15.746656] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:38.559 [2024-05-15 10:08:15.746723] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:38.559 [2024-05-15 10:08:15.800826] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:38.559 [2024-05-15 10:08:15.800892] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:38.559 [2024-05-15 10:08:15.800916] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:38.559 [2024-05-15 10:08:15.886961] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:27:38.817 [2024-05-15 10:08:15.946713] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:38.818 [2024-05-15 10:08:15.946780] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@649 -- # local es=0 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 
00:27:42.102 10:08:18 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.102 [2024-05-15 10:08:19.009033] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:27:42.102 2024/05/15 10:08:19 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:42.102 request: 00:27:42.102 { 00:27:42.102 "method": "bdev_nvme_start_mdns_discovery", 00:27:42.102 "params": { 00:27:42.102 "name": "cdc", 00:27:42.102 "svcname": "_nvme-disc._tcp", 00:27:42.102 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:42.102 } 00:27:42.102 } 00:27:42.102 Got JSON-RPC error response 00:27:42.102 GoRPCClient: error on JSON-RPC call 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # es=1 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:27:42.102 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:27:42.103 10:08:19 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 93943 00:27:42.103 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 93943 00:27:42.103 [2024-05-15 10:08:19.202069] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 93972 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:27:42.362 Got SIGTERM, quitting. 00:27:42.362 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:42.362 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:42.362 avahi-daemon 0.8 exiting. 
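Note on the NOT-wrapped rpc_cmd calls above (mdns_discovery.sh@182 and @190): they are negative tests. bdev_nvme_start_mdns_discovery rejects a second discovery service that reuses either an existing name ("mdns") or a service type that is already being monitored ("_nvme-disc._tcp"), answering with JSON-RPC error Code=-17 (File exists), which the NOT helper turns into a pass. A rough sketch of reproducing the same rejection by hand against a still-running host app, assuming it is listening on /tmp/host.sock and the first discovery service is active; commands are run from the SPDK repo root and reuse only flags that appear in this log:
# Sketch only: the second start is expected to fail with Code=-17 (File exists).
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test \
  && echo 'unexpected success' || echo 'duplicate rejected as expected'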
00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:42.362 rmmod nvme_tcp 00:27:42.362 rmmod nvme_fabrics 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 93893 ']' 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 93893 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@947 -- # '[' -z 93893 ']' 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # kill -0 93893 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # uname 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 93893 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:42.362 killing process with pid 93893 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 93893' 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # kill 93893 00:27:42.362 [2024-05-15 10:08:19.610598] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:42.362 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@971 -- # wait 93893 00:27:42.620 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:42.620 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:42.620 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:42.620 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.620 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:42.620 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.620 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.620 10:08:19 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.879 10:08:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:42.879 00:27:42.879 real 0m21.399s 00:27:42.879 user 0m40.717s 00:27:42.879 sys 0m3.124s 00:27:42.879 10:08:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:42.879 10:08:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:42.879 
************************************ 00:27:42.879 END TEST nvmf_mdns_discovery 00:27:42.879 ************************************ 00:27:42.879 10:08:20 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:27:42.879 10:08:20 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:42.879 10:08:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:42.879 10:08:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:42.879 10:08:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:42.879 ************************************ 00:27:42.879 START TEST nvmf_host_multipath 00:27:42.879 ************************************ 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:42.879 * Looking for test storage... 00:27:42.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.879 10:08:20 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:42.880 10:08:20 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:42.880 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:27:43.138 Cannot find device "nvmf_tgt_br" 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:43.138 Cannot find device "nvmf_tgt_br2" 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:43.138 Cannot find device "nvmf_tgt_br" 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:43.138 Cannot find device "nvmf_tgt_br2" 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:43.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:43.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:43.138 
10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:43.138 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:43.396 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:43.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:27:43.397 00:27:43.397 --- 10.0.0.2 ping statistics --- 00:27:43.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.397 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:43.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:43.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:27:43.397 00:27:43.397 --- 10.0.0.3 ping statistics --- 00:27:43.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.397 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:43.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:43.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:27:43.397 00:27:43.397 --- 10.0.0.1 ping statistics --- 00:27:43.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.397 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94539 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94539 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@828 -- # '[' -z 94539 ']' 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:43.397 10:08:20 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:43.397 [2024-05-15 10:08:20.707920] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:27:43.397 [2024-05-15 10:08:20.708045] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.654 [2024-05-15 10:08:20.854414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:43.654 [2024-05-15 10:08:21.031394] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.654 [2024-05-15 10:08:21.031822] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
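For reference, the nvmf_veth_init sequence traced above (veth pairs, target network namespace, bridge, firewall rules, ping checks) reduces to the standalone sketch below. Interface names and addresses mirror the log; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the pre-cleanup guards are omitted, so treat this as an illustrative reconstruction rather than the actual nvmf/common.sh code.

#!/usr/bin/env bash
# Illustrative sketch (run as root) of the veth/bridge topology built by nvmf_veth_init.
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns "$NS"                          # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge                              # bridge the two host-side peers together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # host -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                       # target namespace -> host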
00:27:43.654 [2024-05-15 10:08:21.032011] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.654 [2024-05-15 10:08:21.032256] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.654 [2024-05-15 10:08:21.033203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.654 [2024-05-15 10:08:21.033600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.654 [2024-05-15 10:08:21.033616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.587 10:08:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:44.587 10:08:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@861 -- # return 0 00:27:44.587 10:08:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:44.587 10:08:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:44.587 10:08:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:44.587 10:08:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.587 10:08:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94539 00:27:44.587 10:08:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:44.587 [2024-05-15 10:08:21.965729] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.913 10:08:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:44.913 Malloc0 00:27:44.913 10:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:45.495 10:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:45.753 10:08:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:45.753 [2024-05-15 10:08:23.135468] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:45.753 [2024-05-15 10:08:23.136309] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.012 10:08:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:46.012 [2024-05-15 10:08:23.359855] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:46.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
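The target-side provisioning traced above condenses to the sketch below: one malloc-backed namespace exported through two TCP listeners on the same subsystem, which is what gives the host two paths to fail over between. Commands are copied from the trace; the inline comments are my reading of the rpc.py flags, so double-check them against rpc.py <command> -h before reusing.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, options as in the trace
$rpc bdev_malloc_create 64 512 -b Malloc0                           # 64 MB RAM bdev with 512-byte blocks
$rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2  # -r: ANA reporting, required for multipath
$rpc nvmf_subsystem_add_ns "$NQN" Malloc0
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420  # path 1
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421  # path 2

With ANA reporting enabled, the rest of the test flips each listener between optimized, non_optimized and inaccessible via nvmf_subsystem_listener_set_ana_state and uses the bpftrace probe (nvmf_path.bt) to confirm that I/O lands on the port that is currently usable.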
00:27:46.012 10:08:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94643 00:27:46.012 10:08:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:46.012 10:08:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:46.012 10:08:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 94643 /var/tmp/bdevperf.sock 00:27:46.012 10:08:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@828 -- # '[' -z 94643 ']' 00:27:46.012 10:08:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:46.012 10:08:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:46.012 10:08:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:46.012 10:08:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:46.012 10:08:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:47.388 10:08:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:47.388 10:08:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@861 -- # return 0 00:27:47.388 10:08:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:47.657 10:08:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:47.940 Nvme0n1 00:27:47.940 10:08:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:48.198 Nvme0n1 00:27:48.456 10:08:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:27:48.456 10:08:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:49.390 10:08:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:49.390 10:08:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:49.648 10:08:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:49.906 10:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:49.906 10:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94731 00:27:49.906 10:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:49.906 10:08:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94539 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:56.464 Attaching 4 probes... 00:27:56.464 @path[10.0.0.2, 4421]: 18871 00:27:56.464 @path[10.0.0.2, 4421]: 19006 00:27:56.464 @path[10.0.0.2, 4421]: 19327 00:27:56.464 @path[10.0.0.2, 4421]: 20247 00:27:56.464 @path[10.0.0.2, 4421]: 20030 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94731 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:56.464 10:08:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:56.722 10:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:56.722 10:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94866 00:27:56.722 10:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:56.722 10:08:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:03.286 Attaching 4 probes... 
00:28:03.286 @path[10.0.0.2, 4420]: 19328 00:28:03.286 @path[10.0.0.2, 4420]: 19436 00:28:03.286 @path[10.0.0.2, 4420]: 19877 00:28:03.286 @path[10.0.0.2, 4420]: 19379 00:28:03.286 @path[10.0.0.2, 4420]: 19312 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94866 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:28:03.286 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:03.568 10:08:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:03.832 10:08:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:28:03.832 10:08:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95000 00:28:03.832 10:08:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:03.832 10:08:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:10.460 Attaching 4 probes... 
00:28:10.460 @path[10.0.0.2, 4421]: 18544 00:28:10.460 @path[10.0.0.2, 4421]: 18883 00:28:10.460 @path[10.0.0.2, 4421]: 19137 00:28:10.460 @path[10.0.0.2, 4421]: 18859 00:28:10.460 @path[10.0.0.2, 4421]: 19091 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95000 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:10.460 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:10.719 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:28:10.719 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95135 00:28:10.719 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:10.719 10:08:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:17.291 10:08:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:17.291 10:08:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:17.291 Attaching 4 probes... 
00:28:17.291 00:28:17.291 00:28:17.291 00:28:17.291 00:28:17.291 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95135 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:17.291 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:17.549 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:28:17.549 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95262 00:28:17.549 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:17.549 10:08:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:24.188 10:09:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:24.188 10:09:00 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:24.188 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:24.188 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:24.188 Attaching 4 probes... 
00:28:24.188 @path[10.0.0.2, 4421]: 19062 00:28:24.188 @path[10.0.0.2, 4421]: 19224 00:28:24.188 @path[10.0.0.2, 4421]: 19208 00:28:24.188 @path[10.0.0.2, 4421]: 19105 00:28:24.188 @path[10.0.0.2, 4421]: 18505 00:28:24.188 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:24.188 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:24.188 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:24.188 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:24.188 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:24.188 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:24.188 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95262 00:28:24.188 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:24.188 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:24.188 [2024-05-15 10:09:01.434683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434812] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.188 [2024-05-15 10:09:01.434869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 
00:28:24.188 [... tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set (this message repeats verbatim many more times, only the timestamp advancing, as the 4421 listener is removed) ...] 00:28:24.189 [2024-05-15
10:09:01.435969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.189 [2024-05-15 10:09:01.435979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.189 [2024-05-15 10:09:01.435988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.189 [2024-05-15 10:09:01.435998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x180b900 is same with the state(5) to be set 00:28:24.189 [2024-05-15 10:09:01.456282] ctrlr.c: 827:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:af540e93-dc42-4b67-b1dc-c25b9e6cb44a' to connect at this address. 00:28:24.189 10:09:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:28:25.125 10:09:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:25.125 10:09:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95397 00:28:25.125 10:09:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:25.125 10:09:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:31.685 Attaching 4 probes... 
00:28:31.685 @path[10.0.0.2, 4420]: 18054 00:28:31.685 @path[10.0.0.2, 4420]: 18578 00:28:31.685 @path[10.0.0.2, 4420]: 19072 00:28:31.685 @path[10.0.0.2, 4420]: 18847 00:28:31.685 @path[10.0.0.2, 4420]: 18688 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95397 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:31.685 10:09:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:31.944 [2024-05-15 10:09:09.110694] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:31.944 10:09:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:32.202 10:09:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:38.810 10:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:38.810 10:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95587 00:28:38.810 10:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:38.810 10:09:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:44.076 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:44.076 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:44.333 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:44.333 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:44.333 Attaching 4 probes... 
00:28:44.333 @path[10.0.0.2, 4421]: 15612 00:28:44.333 @path[10.0.0.2, 4421]: 17414 00:28:44.333 @path[10.0.0.2, 4421]: 17816 00:28:44.333 @path[10.0.0.2, 4421]: 18099 00:28:44.333 @path[10.0.0.2, 4421]: 17484 00:28:44.333 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:44.333 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95587 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94643 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@947 -- # '[' -z 94643 ']' 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # kill -0 94643 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # uname 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 94643 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # echo 'killing process with pid 94643' 00:28:44.591 killing process with pid 94643 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # kill 94643 00:28:44.591 10:09:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@971 -- # wait 94643 00:28:44.591 Connection closed with partial response: 00:28:44.591 00:28:44.591 00:28:44.858 10:09:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94643 00:28:44.858 10:09:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:44.858 [2024-05-15 10:08:23.430775] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:28:44.858 [2024-05-15 10:08:23.430901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94643 ] 00:28:44.858 [2024-05-15 10:08:23.568763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.858 [2024-05-15 10:08:23.751602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.858 Running I/O for 90 seconds... 
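The per-I/O output dumped below comes from bdevperf on the initiator side, which was attached to the same subsystem over both listeners so that ANA state changes steer the traffic. That earlier attach sequence reduces to roughly this sketch (commands copied from the trace; comments are my interpretation of the flags, not authoritative):

brpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
NQN=nqn.2016-06.io.spdk:cnode1
# Start bdevperf with its own RPC socket; -z defers the workload until perform_tests is called.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done   # simplified stand-in for the harness's waitforlisten
$brpc bdev_nvme_set_options -r -1                           # NVMe bdev options taken verbatim from the trace
$brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" -l -1 -o 10                # path 1
$brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN" -x multipath -l -1 -o 10   # path 2, multipath mode
# Start the timed run; the harness backgrounds this and then drives the ANA state changes.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests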
00:28:44.858 [2024-05-15 10:08:34.058976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.858 [2024-05-15 10:08:34.059071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.059174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.059193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.059217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.059233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.059255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.059271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.059293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.059308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.059330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.059345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.059367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.059382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.059403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.059418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:44.858 [2024-05-15 10:08:34.060878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.858 [2024-05-15 10:08:34.060894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.060916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.060931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.060953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.060968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.060990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 
[2024-05-15 10:08:34.061322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.859 [2024-05-15 10:08:34.061662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2448 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.859 [2024-05-15 10:08:34.061699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.859 [2024-05-15 10:08:34.061736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.859 [2024-05-15 10:08:34.061773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.859 [2024-05-15 10:08:34.061810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.859 [2024-05-15 10:08:34.061847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.859 [2024-05-15 10:08:34.061884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.859 [2024-05-15 10:08:34.061921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.859 [2024-05-15 10:08:34.061960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.061983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.061998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.062019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.062035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.062057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:71 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.062072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.062113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.062130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.062153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.062168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.062190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.062206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.062227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.859 [2024-05-15 10:08:34.062243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:44.859 [2024-05-15 10:08:34.062265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.062281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.062302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.062318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.062339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.062355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.062376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.062392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.062413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.062429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.062451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.062466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.062488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.062503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.062525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.062540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.062574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.062591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:44.860 
[2024-05-15 10:08:34.063611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.063983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.063999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 
sqhd:0054 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.860 [2024-05-15 10:08:34.064604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:44.860 [2024-05-15 10:08:34.064626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.064641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.064663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.064679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.064701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.064717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.064739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.064754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.064776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.064791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.064819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.064835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.064856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.064872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.064894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.064909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.064930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.064945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.064967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.064983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 
[2024-05-15 10:08:34.065177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:34.065510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:34.065525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.747738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127192 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.747819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.748074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.748137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.748175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.748212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.748269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.748317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.748351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.748385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.748420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748441] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.748455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.861 [2024-05-15 10:08:40.748489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:44.861 [2024-05-15 10:08:40.748509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 
10:08:40.748793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.748966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.748980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.749016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.749052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.862 [2024-05-15 10:08:40.749087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.862 [2024-05-15 10:08:40.749630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:44.862 [2024-05-15 10:08:40.749650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.749665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.749685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.749700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.749720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.749734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.749755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.749769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.751797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.751820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.751846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.751861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.751884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.751899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.751922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.751937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.751960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.751974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.751997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752244] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.863 [2024-05-15 10:08:40.752333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.863 [2024-05-15 10:08:40.752370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.863 [2024-05-15 10:08:40.752408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:44.863 
[2024-05-15 10:08:40.752738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.752982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.752996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.753017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.753030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.753052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.753070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.753092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.753115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.753137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.753151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.753173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.753186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.753208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.753221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.863 [2024-05-15 10:08:40.753243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.863 [2024-05-15 10:08:40.753256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.753278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.753291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.753313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.753326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.753347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.753360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.756262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756300] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127080 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.756983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.756997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.757020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.757033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.757056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.757069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.757099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.757113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.757135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.864 [2024-05-15 10:08:40.757149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.757171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.757185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.757207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.757221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.757244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.757257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.757280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.757298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.757321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.757334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.757357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.757370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0064 p:0 
m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.757393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.757407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.758472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.758491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.758517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.758531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.758556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.758569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.758593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.758607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:44.864 [2024-05-15 10:08:40.758632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.864 [2024-05-15 10:08:40.758645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:40.758669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:40.758682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:40.758707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:40.758720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:40.758744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:40.758758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:40.758782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:40.758802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.899723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.899802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.899866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.899885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.899907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.899923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.899946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.899962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.899983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.899999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.900020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.900036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.900057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.900072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.900104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.900120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.900142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.900157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:44.865 [2024-05-15 10:08:47.901810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.901965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.901996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.865 [2024-05-15 10:08:47.902456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:44.865 [2024-05-15 10:08:47.902476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 
00:28:44.866 [2024-05-15 10:08:47.902930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.902977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.902998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.903829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.903843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.904010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.904028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.904056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.904071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.904107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.866 [2024-05-15 10:08:47.904123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.866 [2024-05-15 10:08:47.904150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:44.867 [2024-05-15 10:08:47.904246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 
lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.867 [2024-05-15 10:08:47.904744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:08:47.904785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:08:47.904826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:08:47.904867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:08:47.904908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:08:47.904948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.904974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:08:47.904989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:08:47.905015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:08:47.905035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.436443] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.436503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.436600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.436622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.436645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.436660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.436683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.436699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.436720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.436735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.436757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.436772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.436794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.436809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.437164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.437186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.437203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.437218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.437235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.437250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 
10:09:01.437266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.437281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.437297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.437311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.867 [2024-05-15 10:09:01.437346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.867 [2024-05-15 10:09:01.437361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-05-15 10:09:01.437393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-05-15 10:09:01.437425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:43080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:43088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:43168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:43192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:43200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.437978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:43208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.437994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:43224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:43240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:43248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:43256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:43272 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:43280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:43296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:43312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.868 [2024-05-15 10:09:01.438420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-05-15 10:09:01.438452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-05-15 10:09:01.438483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-05-15 10:09:01.438519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-05-15 10:09:01.438550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 
[2024-05-15 10:09:01.438586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-05-15 10:09:01.438618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-05-15 10:09:01.438648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.868 [2024-05-15 10:09:01.438680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.868 [2024-05-15 10:09:01.438696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.438711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.438727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.438742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.438758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.438773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.438790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.438804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.438820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.438835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.438851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.438866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.438883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.438897] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.438914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.438929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.438945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.438967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.438984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.438999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:42768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:42792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:42800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:42832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:42888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:42896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:42920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:42928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.439970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.439986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.869 [2024-05-15 10:09:01.440003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.869 [2024-05-15 10:09:01.440019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:42976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:42992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 
[2024-05-15 10:09:01.440219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:43008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.870 [2024-05-15 10:09:01.440450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.870 [2024-05-15 10:09:01.440481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.870 [2024-05-15 10:09:01.440514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440559] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.440573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43336 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.440587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440606] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.440627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.440639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43344 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.440654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.440680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.440691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43352 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.440705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.440733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.440745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43360 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.440759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.440784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.440795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43368 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.440811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.440837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.440848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43376 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.440862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.440888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.440899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43384 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.440913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.440939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.440950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43392 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.440964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.440982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.440993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.441004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43400 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.441019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.441039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.441050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.441061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43408 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.441076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.441103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.441115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.441126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43416 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.441141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.441160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.441170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.441182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43424 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.441197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.441212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.441223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 
10:09:01.441234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43432 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.441248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.441264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.441274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.441286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43440 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.441300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.441315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.870 [2024-05-15 10:09:01.441326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.870 [2024-05-15 10:09:01.441337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43448 len:8 PRP1 0x0 PRP2 0x0 00:28:44.870 [2024-05-15 10:09:01.448991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.870 [2024-05-15 10:09:01.449029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.871 [2024-05-15 10:09:01.449042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.871 [2024-05-15 10:09:01.449054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43456 len:8 PRP1 0x0 PRP2 0x0 00:28:44.871 [2024-05-15 10:09:01.449070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.871 [2024-05-15 10:09:01.449101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:44.871 [2024-05-15 10:09:01.449115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:44.871 [2024-05-15 10:09:01.449127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:43464 len:8 PRP1 0x0 PRP2 0x0 00:28:44.871 [2024-05-15 10:09:01.449157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:44.871 [2024-05-15 10:09:01.449248] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f81310 was disconnected and freed. reset controller. 
00:28:44.871 [2024-05-15 10:09:01.449441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:44.871 [2024-05-15 10:09:01.449468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.871 [2024-05-15 10:09:01.449486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:44.871 [2024-05-15 10:09:01.449503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.871 [2024-05-15 10:09:01.449520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:44.871 [2024-05-15 10:09:01.449536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.871 [2024-05-15 10:09:01.449553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:44.871 [2024-05-15 10:09:01.449570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.871 [2024-05-15 10:09:01.449588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:44.871 [2024-05-15 10:09:01.449605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.871 [2024-05-15 10:09:01.449630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f539f0 is same with the state(5) to be set
00:28:44.871 [2024-05-15 10:09:01.450939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:44.871 [2024-05-15 10:09:01.450986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f539f0 (9): Bad file descriptor
00:28:44.871 [2024-05-15 10:09:01.458476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:4 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:28:44.871 [2024-05-15 10:09:01.458523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:44.871 [2024-05-15 10:09:01.458557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4421 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:44.871 [2024-05-15 10:09:01.458575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:28:44.871 [2024-05-15 10:09:01.458593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:44.871 [2024-05-15 10:09:01.458609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f539f0
00:28:44.871 [2024-05-15 10:09:01.458670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f539f0 (9): Bad file descriptor
00:28:44.871 [2024-05-15 10:09:01.458698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:44.871 [2024-05-15 10:09:01.458714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.871 [2024-05-15 10:09:01.458732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.871 [2024-05-15 10:09:01.458763] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.871 [2024-05-15 10:09:01.458779] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.871 [2024-05-15 10:09:11.477596] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:44.871 Received shutdown signal, test time was about 55.996916 seconds 00:28:44.871 00:28:44.871 Latency(us) 00:28:44.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.871 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:44.871 Verification LBA range: start 0x0 length 0x4000 00:28:44.871 Nvme0n1 : 56.00 8091.32 31.61 0.00 0.00 15794.76 108.74 7030452.42 00:28:44.871 =================================================================================================================== 00:28:44.871 Total : 8091.32 31.61 0.00 0.00 15794.76 108.74 7030452.42 00:28:44.871 10:09:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:45.129 10:09:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:45.129 10:09:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:45.129 10:09:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:28:45.129 10:09:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:45.129 10:09:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:45.412 rmmod nvme_tcp 00:28:45.412 rmmod nvme_fabrics 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94539 ']' 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94539 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@947 -- # '[' -z 94539 ']' 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # kill -0 94539 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # uname 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 94539 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 
-- # process_name=reactor_0 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # echo 'killing process with pid 94539' 00:28:45.412 killing process with pid 94539 00:28:45.412 10:09:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # kill 94539 00:28:45.413 [2024-05-15 10:09:22.630316] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:45.413 10:09:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@971 -- # wait 94539 00:28:45.670 10:09:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:45.670 10:09:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:45.670 10:09:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:45.670 10:09:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:45.670 10:09:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:45.670 10:09:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.670 10:09:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.670 10:09:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.928 10:09:23 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:45.928 00:28:45.928 real 1m2.990s 00:28:45.928 user 2m55.046s 00:28:45.928 sys 0m18.327s 00:28:45.928 10:09:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:45.928 10:09:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:45.928 ************************************ 00:28:45.928 END TEST nvmf_host_multipath 00:28:45.928 ************************************ 00:28:45.928 10:09:23 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:45.928 10:09:23 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:28:45.928 10:09:23 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:45.928 10:09:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:45.928 ************************************ 00:28:45.928 START TEST nvmf_timeout 00:28:45.928 ************************************ 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:45.928 * Looking for test storage... 
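The multipath run above finishes by tearing down its target state before timeout.sh starts: the test subsystem is deleted over the RPC socket, the kernel initiator modules are unloaded, and the nvmf_tgt process (pid 94539 in this run) is killed. A minimal sketch of that cleanup order, restating only the commands visible in the trace:

  # drop the subsystem so no host can reconnect during teardown
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # unload the kernel initiator modules used by the host side of the test
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # stop the target application itself (pid taken from the harness's nvmfpid)
  kill 94539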
00:28:45.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.928 10:09:23 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.929 
10:09:23 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.929 10:09:23 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:45.929 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:46.188 Cannot find device "nvmf_tgt_br" 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:46.188 Cannot find device "nvmf_tgt_br2" 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:46.188 Cannot find device "nvmf_tgt_br" 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:46.188 Cannot find device "nvmf_tgt_br2" 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:46.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:46.188 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:46.188 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:46.189 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:46.189 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:46.189 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:46.189 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:46.189 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:46.189 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:46.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:28:46.449 00:28:46.449 --- 10.0.0.2 ping statistics --- 00:28:46.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.449 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:46.449 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:46.449 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:28:46.449 00:28:46.449 --- 10.0.0.3 ping statistics --- 00:28:46.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.449 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:46.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:28:46.449 00:28:46.449 --- 10.0.0.1 ping statistics --- 00:28:46.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.449 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=95910 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 95910 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 95910 ']' 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:46.449 10:09:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:46.450 [2024-05-15 10:09:23.769196] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
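The nvmf_veth_init sequence above is what gives the TCP tests their topology: a network namespace nvmf_tgt_ns_spdk holds the target addresses 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 in the root namespace, and the veth peers are joined by the nvmf_br bridge. A condensed sketch of that setup, restating only the commands visible in the trace (the preliminary cleanup of stale links and the error handling are left out):

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: one end faces the host, the peer end gets bridged
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # move the target-side ends into the namespace and assign addresses
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and tie the bridge-side ends together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # open port 4420 for NVMe/TCP, allow bridge forwarding, then sanity-check with ping
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1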
00:28:46.450 [2024-05-15 10:09:23.769298] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.707 [2024-05-15 10:09:23.912880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:46.707 [2024-05-15 10:09:24.079382] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:46.707 [2024-05-15 10:09:24.079489] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.707 [2024-05-15 10:09:24.079506] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:46.707 [2024-05-15 10:09:24.079519] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:46.707 [2024-05-15 10:09:24.079532] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.707 [2024-05-15 10:09:24.079628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.707 [2024-05-15 10:09:24.079641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.639 10:09:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:47.639 10:09:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0 00:28:47.639 10:09:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:47.639 10:09:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:47.639 10:09:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:47.639 10:09:24 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.639 10:09:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:47.639 10:09:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:47.896 [2024-05-15 10:09:25.073651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.896 10:09:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:48.152 Malloc0 00:28:48.152 10:09:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:48.467 10:09:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:48.724 10:09:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.981 [2024-05-15 10:09:26.142389] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:48.981 [2024-05-15 10:09:26.142734] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.981 10:09:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96007 00:28:48.981 10:09:26 nvmf_tcp.nvmf_timeout 
-- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:48.981 10:09:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96007 /var/tmp/bdevperf.sock 00:28:48.981 10:09:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 96007 ']' 00:28:48.981 10:09:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:48.981 10:09:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:48.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:48.981 10:09:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:48.981 10:09:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:48.981 10:09:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:48.981 [2024-05-15 10:09:26.227066] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:28:48.981 [2024-05-15 10:09:26.227234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96007 ] 00:28:49.239 [2024-05-15 10:09:26.373371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.239 [2024-05-15 10:09:26.555595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.173 10:09:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:50.173 10:09:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0 00:28:50.173 10:09:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:50.431 10:09:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:50.689 NVMe0n1 00:28:50.689 10:09:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96055 00:28:50.689 10:09:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:50.689 10:09:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:28:50.689 Running I/O for 10 seconds... 
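Taken together, the steps above are the whole setup for the timeout test: the target gets a TCP transport and a 64 MB malloc bdev (512-byte blocks) exposed through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and bdevperf is started in RPC-driven mode and attached to that subsystem with a 5 second controller-loss timeout and a 2 second reconnect delay. A compressed sketch of the same sequence, reusing the paths and flags shown in the trace (the waits for each RPC socket are omitted):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target side: transport, backing bdev, subsystem, namespace, listener
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: bdevperf waits for RPCs (-z), queue depth 128, 4 KiB verify I/O for 10 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &

  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # start the timed run; the test then removes the listener while I/O is in flight
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests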
00:28:51.622 10:09:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.920 [2024-05-15 10:09:29.220158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:51.920 [2024-05-15 10:09:29.220226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every other outstanding WRITE and READ on qid:1 (LBAs in the 86824-87840 range), each completing with ABORTED - SQ DELETION (00/08) once the 10.0.0.2:4420 listener is removed; the remaining repetitions of this pattern are condensed here]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.222879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.222890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.222902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.222913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.222927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.222944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.222960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.222974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.222991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.923 [2024-05-15 10:09:29.223320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x80e030 is same with the state(5) to be set 00:28:51.923 [2024-05-15 10:09:29.223347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:51.923 [2024-05-15 10:09:29.223356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:51.923 [2024-05-15 10:09:29.223366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87448 len:8 PRP1 0x0 PRP2 0x0 00:28:51.923 [2024-05-15 10:09:29.223377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:51.923 [2024-05-15 10:09:29.223449] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x80e030 was disconnected and freed. reset controller. 
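The burst of *NOTICE* lines above is the host side draining its queue after the connection to the target is lost: every outstanding READ/WRITE on qid:1 is completed manually as ABORTED - SQ DELETION (00/08), qpair 0x80e030 is disconnected and freed, and bdev_nvme schedules a controller reset (the "resetting controller" notices that follow). A minimal sketch of how this situation is provoked from the shell, reusing the NQN and listener address that appear verbatim later in this log; the SPDK variable and the exact moment of removal are illustrative, not part of the harness output:

    # Drop the TCP listener while bdevperf I/O is in flight; outstanding commands are
    # then completed as "ABORTED - SQ DELETION" and the initiator begins reconnecting.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420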
00:28:51.923 [2024-05-15 10:09:29.223701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.923 [2024-05-15 10:09:29.223791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79ca00 (9): Bad file descriptor 00:28:51.923 [2024-05-15 10:09:29.223907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.923 [2024-05-15 10:09:29.223951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.924 [2024-05-15 10:09:29.223966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79ca00 with addr=10.0.0.2, port=4420 00:28:51.924 [2024-05-15 10:09:29.223977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79ca00 is same with the state(5) to be set 00:28:51.924 [2024-05-15 10:09:29.223995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79ca00 (9): Bad file descriptor 00:28:51.924 [2024-05-15 10:09:29.224012] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.924 [2024-05-15 10:09:29.224028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.924 [2024-05-15 10:09:29.224049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.924 [2024-05-15 10:09:29.224079] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:51.924 [2024-05-15 10:09:29.224106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.924 10:09:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:28:54.451 [2024-05-15 10:09:31.224498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.451 [2024-05-15 10:09:31.224659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.451 [2024-05-15 10:09:31.224682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79ca00 with addr=10.0.0.2, port=4420 00:28:54.451 [2024-05-15 10:09:31.224707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79ca00 is same with the state(5) to be set 00:28:54.451 [2024-05-15 10:09:31.224773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79ca00 (9): Bad file descriptor 00:28:54.451 [2024-05-15 10:09:31.224813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.451 [2024-05-15 10:09:31.224834] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.451 [2024-05-15 10:09:31.224860] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.451 [2024-05-15 10:09:31.224918] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:54.451 [2024-05-15 10:09:31.224942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.451 10:09:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:28:54.451 10:09:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:54.451 10:09:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:54.451 10:09:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:54.451 10:09:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:28:54.451 10:09:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:54.451 10:09:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:54.708 10:09:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:54.708 10:09:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:28:56.081 [2024-05-15 10:09:33.225227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.081 [2024-05-15 10:09:33.225377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.081 [2024-05-15 10:09:33.225399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x79ca00 with addr=10.0.0.2, port=4420 00:28:56.081 [2024-05-15 10:09:33.225421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79ca00 is same with the state(5) to be set 00:28:56.081 [2024-05-15 10:09:33.225465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79ca00 (9): Bad file descriptor 00:28:56.081 [2024-05-15 10:09:33.225492] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:56.081 [2024-05-15 10:09:33.225508] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:56.081 [2024-05-15 10:09:33.225527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:56.081 [2024-05-15 10:09:33.225568] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:56.081 [2024-05-15 10:09:33.225587] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:57.979 [2024-05-15 10:09:35.225682] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
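Each reconnect attempt above fails identically: connect() returns errno 111 (ECONNREFUSED) because no listener is present, the controller stays in a failed state, and the reset is retried after the configured delay. In between, the harness confirms that the controller and its bdev are still registered (the host/timeout.sh@57/@58 checks for NVMe0 and NVMe0n1); the rpc.py and jq calls traced at @41/@37 reduce to the sketch below, where the RPC/ctrl/bdev variable names are illustrative and the socket path and RPC names are copied from the trace:

    # Presence check during the reconnect window: NVMe0 / NVMe0n1 should still be listed.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    ctrl=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')   # expected: NVMe0
    bdev=$($RPC bdev_get_bdevs | jq -r '.[].name')              # expected: NVMe0n1
    [[ $ctrl == NVMe0 && $bdev == NVMe0n1 ]] || echo 'controller dropped too early'

Once the controller is finally given up on, the same two queries return nothing, which is what the '' == '' comparisons at host/timeout.sh@62/@63 further down verify.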
00:28:58.913 00:28:58.913 Latency(us) 00:28:58.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.913 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:58.913 Verification LBA range: start 0x0 length 0x4000 00:28:58.913 NVMe0n1 : 8.21 1322.64 5.17 15.60 0.00 95681.90 2543.42 7030452.42 00:28:58.913 =================================================================================================================== 00:28:58.913 Total : 1322.64 5.17 15.60 0.00 95681.90 2543.42 7030452.42 00:28:58.913 0 00:28:59.845 10:09:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:28:59.845 10:09:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:59.845 10:09:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:00.109 10:09:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:29:00.109 10:09:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:29:00.109 10:09:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:29:00.109 10:09:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96055 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96007 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 96007 ']' 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 96007 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 96007 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:29:00.381 killing process with pid 96007 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 96007' 00:29:00.381 Received shutdown signal, test time was about 9.571656 seconds 00:29:00.381 00:29:00.381 Latency(us) 00:29:00.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.381 =================================================================================================================== 00:29:00.381 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 96007 00:29:00.381 10:09:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 96007 00:29:00.639 10:09:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:00.897 [2024-05-15 10:09:38.167046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.897 10:09:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96213 00:29:00.897 10:09:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:29:00.897 10:09:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96213 /var/tmp/bdevperf.sock 00:29:00.897 10:09:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 96213 ']' 00:29:00.897 10:09:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:00.897 10:09:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:00.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:00.897 10:09:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:00.897 10:09:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:00.897 10:09:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:00.897 [2024-05-15 10:09:38.234987] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:29:00.897 [2024-05-15 10:09:38.235107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96213 ] 00:29:01.155 [2024-05-15 10:09:38.373832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.413 [2024-05-15 10:09:38.553067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:01.978 10:09:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:01.978 10:09:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0 00:29:01.978 10:09:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:02.545 10:09:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:29:02.803 NVMe0n1 00:29:02.803 10:09:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96266 00:29:02.803 10:09:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:02.803 10:09:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:29:03.061 Running I/O for 10 seconds... 
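For the second pass the listener has been re-added (the nvmf_subsystem_add_listener call and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above), and a fresh bdevperf instance (pid 96213) is wired up with explicit recovery limits before perform_tests starts the 10-second verify workload. Condensed into a sketch, with the commands copied from the trace above; the SPDK/SOCK variables, the backgrounding, and the omission of the waitforlisten/cleanup plumbing are simplifications:

    # Second-pass setup: launch bdevperf against its own RPC socket, then attach the
    # NVMe-oF TCP controller with a 1 s reconnect delay, 2 s fast-io-fail and 5 s
    # ctrlr-loss timeout, and finally kick off the workload via perform_tests.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bdevperf.sock
    "$SPDK"/build/examples/bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 -f &
    # (the harness waits for the RPC socket to appear before issuing the RPCs below)
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_set_options -r -1
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 \
        --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests &

The nvmf_subsystem_remove_listener call at 10:09:41 below then removes the listener again, so the abort/reconnect sequence repeats, this time governed by those timeouts.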
00:29:03.997 10:09:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.259 [2024-05-15 10:09:41.445321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.259 [2024-05-15 10:09:41.445399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.259 [2024-05-15 10:09:41.445430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.259 [2024-05-15 10:09:41.445442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.259 [2024-05-15 10:09:41.445457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.259 [2024-05-15 10:09:41.445468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.259 [2024-05-15 10:09:41.445481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.259 [2024-05-15 10:09:41.445494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 
[2024-05-15 10:09:41.445635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.445977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.445988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.260 [2024-05-15 10:09:41.446394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.260 [2024-05-15 10:09:41.446418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.260 [2024-05-15 10:09:41.446441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.260 [2024-05-15 10:09:41.446454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.260 [2024-05-15 10:09:41.446464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.261 [2024-05-15 10:09:41.446487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.261 [2024-05-15 10:09:41.446511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.261 [2024-05-15 10:09:41.446537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.261 [2024-05-15 10:09:41.446561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.261 [2024-05-15 10:09:41.446584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 
10:09:41.446597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.261 [2024-05-15 10:09:41.446608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.446979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.446992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:54 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79560 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.261 [2024-05-15 10:09:41.447470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.261 [2024-05-15 10:09:41.447482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 
10:09:41.447585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.262 [2024-05-15 10:09:41.447778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.447830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79712 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.447841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 
[2024-05-15 10:09:41.447867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.447877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79720 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.447887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.447907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.447916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79728 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.447926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.447946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.447955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79736 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.447965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.447976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.447986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.447995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79744 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.448025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.448034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79752 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.448064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.448073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79760 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.448118] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.448127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79768 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.448160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.448170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79776 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.448200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.448209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79784 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.448239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.448248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79792 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.448278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.448287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79800 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.448316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.448325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79808 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.448354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.448363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79816 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.448393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.448402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78880 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.262 [2024-05-15 10:09:41.448434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.262 [2024-05-15 10:09:41.448443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78888 len:8 PRP1 0x0 PRP2 0x0 00:29:04.262 [2024-05-15 10:09:41.448454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.262 [2024-05-15 10:09:41.448465] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78896 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78904 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78912 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 
10:09:41.448601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78920 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78928 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78936 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78944 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78952 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78960 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78976 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78984 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.448935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.263 [2024-05-15 10:09:41.448944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.263 [2024-05-15 10:09:41.448953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78992 len:8 PRP1 0x0 PRP2 0x0 00:29:04.263 [2024-05-15 10:09:41.448963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.263 [2024-05-15 10:09:41.449053] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b3df10 was disconnected and freed. reset controller. 
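The burst of ABORTED - SQ DELETION (00/08) completions above is the host draining every queued READ/WRITE after the target tore down the submission queue; the "(xx/yy)" pair that spdk_nvme_print_completion appends is the NVMe status code type and status code. A tiny helper along these lines (illustrative only, not part of timeout.sh) decodes the generic-status values seen in this log:

    # decode_nvme_status SCT SC: maps the (sct/sc) pair printed by
    # spdk_nvme_print_completion to a human-readable meaning. Only the
    # generic-status values that appear in this log are handled.
    decode_nvme_status() {
        local sct=$1 sc=$2
        if [[ $sct == 00 ]]; then
            case $sc in
                00) echo "GENERIC: SUCCESSFUL COMPLETION" ;;
                08) echo "GENERIC: COMMAND ABORTED DUE TO SQ DELETION" ;;
                *)  echo "GENERIC: status code 0x$sc" ;;
            esac
        else
            echo "status code type 0x$sct, status code 0x$sc"
        fi
    }
    # Example: every completion in the burst above prints (00/08)
    decode_nvme_status 00 08
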
00:29:04.263 [2024-05-15 10:09:41.449185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:04.263 [2024-05-15 10:09:41.449208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:04.263 [2024-05-15 10:09:41.449222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:04.263 [2024-05-15 10:09:41.449233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:04.263 [2024-05-15 10:09:41.449245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:04.263 [2024-05-15 10:09:41.449256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:04.263 [2024-05-15 10:09:41.449267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:04.263 [2024-05-15 10:09:41.449280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:04.263 [2024-05-15 10:09:41.449291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acca00 is same with the state(5) to be set 
00:29:04.263 [2024-05-15 10:09:41.449531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:29:04.263 [2024-05-15 10:09:41.449561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acca00 (9): Bad file descriptor 
00:29:04.263 [2024-05-15 10:09:41.449673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:04.263 [2024-05-15 10:09:41.449724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:04.263 [2024-05-15 10:09:41.449739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1acca00 with addr=10.0.0.2, port=4420 
00:29:04.263 [2024-05-15 10:09:41.449751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acca00 is same with the state(5) to be set 
00:29:04.263 [2024-05-15 10:09:41.449769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acca00 (9): Bad file descriptor 
00:29:04.263 [2024-05-15 10:09:41.449786] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:29:04.263 [2024-05-15 10:09:41.449797] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:29:04.263 [2024-05-15 10:09:41.449810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:04.263 [2024-05-15 10:09:41.449829] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
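Every reconnect attempt above fails inside posix_sock_create with errno = 111 (ECONNREFUSED) because the TCP listener is gone, so spdk_nvme_ctrlr_reconnect_poll_async keeps leaving the controller in the failed state until the listener returns. While that is happening, the controller state can be inspected out of band through bdevperf's RPC socket; a minimal sketch, assuming the bdevperf app from this run is still serving RPCs on /var/tmp/bdevperf.sock and that the generic bdev_nvme_get_controllers RPC is available in this SPDK build:

    # Out-of-band inspection, not part of timeout.sh: query the running
    # bdevperf app (it embeds the standard SPDK RPC server) for the NVMe
    # controllers it has attached.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Dump the NVMe bdev controllers and the transport IDs they are bound to.
    "$rpc" -s "$sock" bdev_nvme_get_controllers

    # Confirm from the shell that 10.0.0.2:4420 refuses connections while the
    # listener is removed (this is the errno = 111 seen in the log).
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null \
        || echo "10.0.0.2:4420 is not accepting connections"
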
00:29:04.263 [2024-05-15 10:09:41.449840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:29:04.263 10:09:41 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 
00:29:05.198 [2024-05-15 10:09:42.450055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:05.198 [2024-05-15 10:09:42.450198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:05.198 [2024-05-15 10:09:42.450215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1acca00 with addr=10.0.0.2, port=4420 
00:29:05.198 [2024-05-15 10:09:42.450233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acca00 is same with the state(5) to be set 
00:29:05.198 [2024-05-15 10:09:42.450269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acca00 (9): Bad file descriptor 
00:29:05.198 [2024-05-15 10:09:42.450290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:29:05.198 [2024-05-15 10:09:42.450301] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:29:05.198 [2024-05-15 10:09:42.450315] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:05.198 [2024-05-15 10:09:42.450347] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.198 [2024-05-15 10:09:42.450361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:29:05.198 10:09:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:29:05.457 [2024-05-15 10:09:42.746467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:05.457 10:09:42 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96266 
00:29:06.432 [2024-05-15 10:09:43.468496] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:13.023 
00:29:13.023 Latency(us) 
00:29:13.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:29:13.023 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:29:13.023 Verification LBA range: start 0x0 length 0x4000 
00:29:13.023 NVMe0n1 : 10.01 6602.77 25.79 0.00 0.00 19350.33 1521.37 3019898.88 
00:29:13.023 =================================================================================================================== 
00:29:13.023 Total : 6602.77 25.79 0.00 0.00 19350.33 1521.37 3019898.88 
00:29:13.023 0 
00:29:13.023 10:09:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96383 
00:29:13.023 10:09:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:29:13.023 10:09:50 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 
00:29:13.283 Running I/O for 10 seconds... 
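This stretch of the trace is the core of the timeout test: with the listener removed, queued I/O was aborted and every reconnect failed, then host/timeout.sh@91 re-adds the TCP listener, the target reports that it is listening again, and the next reconnect poll ends with "Resetting controller successful." before bdevperf starts a fresh 10-second run. The same outage/recovery cycle can be reproduced by hand with the two RPCs the script uses (the matching remove_listener call follows immediately below); a minimal sketch, reusing the NQN and listen address from this run:

    # Reproduce the listener outage/recovery cycle that host/timeout.sh drives,
    # using the same RPC calls that appear in this trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Drop the TCP listener: in-flight I/O completes as ABORTED - SQ DELETION
    # and every reconnect attempt fails with ECONNREFUSED.
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

    sleep 5   # outage window; the length here is arbitrary for the sketch

    # Restore the listener: the next reconnect poll succeeds and bdev_nvme
    # logs "Resetting controller successful."
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
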
00:29:14.228 10:09:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:14.489 [2024-05-15 10:09:51.670024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670156] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670237] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670266] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670305] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670325] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.489 [2024-05-15 10:09:51.670344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670354] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670462] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670482] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the 
state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670530] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.670549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a41840 is same with the state(5) to be set 00:29:14.490 [2024-05-15 10:09:51.671868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.671918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.671946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.671958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.671973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.671984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.671997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672128] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-05-15 10:09:51.672138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-05-15 10:09:51.672162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-05-15 10:09:51.672186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-05-15 10:09:51.672209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-05-15 10:09:51.672232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-05-15 10:09:51.672255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-05-15 10:09:51.672278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-05-15 10:09:51.672302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.490 [2024-05-15 10:09:51.672327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82984 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.490 [2024-05-15 10:09:51.672721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.490 [2024-05-15 10:09:51.672739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.672754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.672771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.672787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.672805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.672821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.672840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.672856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.672874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.672895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.672921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.672942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.672959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.672975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.672992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:14.491 [2024-05-15 10:09:51.673075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673419] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.673970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.673985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.674003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.674018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.674036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.674052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.674069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.674085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.674110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.491 [2024-05-15 10:09:51.674127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.491 [2024-05-15 10:09:51.674145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.492 [2024-05-15 10:09:51.674161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.492 [2024-05-15 10:09:51.674196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.492 [2024-05-15 10:09:51.674238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.492 [2024-05-15 10:09:51.674282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.492 [2024-05-15 10:09:51.674313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.492 [2024-05-15 10:09:51.674337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.492 [2024-05-15 10:09:51.674360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.492 [2024-05-15 10:09:51.674384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.492 [2024-05-15 10:09:51.674410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.492 [2024-05-15 10:09:51.674433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.492 [2024-05-15 10:09:51.674456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 
[2024-05-15 10:09:51.674652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.674981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.674992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.675004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.675015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.675028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.675038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.675050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.675061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.675074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.675084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.675110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.675121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.675134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.675153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.675166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.675176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.492 [2024-05-15 10:09:51.675189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.492 [2024-05-15 10:09:51.675209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83784 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.493 [2024-05-15 10:09:51.675563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:14.493 [2024-05-15 10:09:51.675610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:14.493 [2024-05-15 10:09:51.675619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83848 len:8 PRP1 0x0 PRP2 0x0 00:29:14.493 [2024-05-15 10:09:51.675630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:14.493 [2024-05-15 10:09:51.675709] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b1c6f0 was disconnected and freed. reset controller. 
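The run of ABORTED - SQ DELETION notices above is the host aborting every queued READ and WRITE when its I/O submission queue is deleted during the controller reset: the timeout test removes the target's TCP listener while the bdevperf workload still has I/O queued, the qpair is disconnected and freed (last notice above), and each outstanding command is completed manually with that status. Roughly the listener toggle behind it, reusing the NQN, address and port from this log and assuming the commands are run from the SPDK repo on the target side; the actual test script differs in detail:

  # drop the listener so host reconnect attempts start failing (connect() errno 111)
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # let the host run a few reset/reconnect attempts (host/timeout.sh sleeps 3 here)
  sleep 3
  # restore the listener; the next reset attempt reconnects and I/O resumes
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420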
00:29:14.493 [2024-05-15 10:09:51.676084] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.493 [2024-05-15 10:09:51.676259] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acca00 (9): Bad file descriptor 00:29:14.493 [2024-05-15 10:09:51.676550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.493 [2024-05-15 10:09:51.676695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:14.493 [2024-05-15 10:09:51.676807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1acca00 with addr=10.0.0.2, port=4420 00:29:14.493 [2024-05-15 10:09:51.676910] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acca00 is same with the state(5) to be set 00:29:14.493 [2024-05-15 10:09:51.677074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acca00 (9): Bad file descriptor 00:29:14.493 [2024-05-15 10:09:51.677158] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:14.493 [2024-05-15 10:09:51.677293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:14.493 [2024-05-15 10:09:51.677349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:14.493 [2024-05-15 10:09:51.677397] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:14.493 [2024-05-15 10:09:51.677433] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:14.493 10:09:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:29:15.426 [2024-05-15 10:09:52.677741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.426 [2024-05-15 10:09:52.678138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.426 [2024-05-15 10:09:52.678266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1acca00 with addr=10.0.0.2, port=4420 00:29:15.426 [2024-05-15 10:09:52.678392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acca00 is same with the state(5) to be set 00:29:15.426 [2024-05-15 10:09:52.678521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acca00 (9): Bad file descriptor 00:29:15.426 [2024-05-15 10:09:52.678703] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.426 [2024-05-15 10:09:52.678821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.426 [2024-05-15 10:09:52.678948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.426 [2024-05-15 10:09:52.679189] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:15.426 [2024-05-15 10:09:52.679356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:16.358 [2024-05-15 10:09:53.679739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.358 [2024-05-15 10:09:53.680221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:16.358 [2024-05-15 10:09:53.680341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1acca00 with addr=10.0.0.2, port=4420 00:29:16.358 [2024-05-15 10:09:53.680540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acca00 is same with the state(5) to be set 00:29:16.358 [2024-05-15 10:09:53.680714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acca00 (9): Bad file descriptor 00:29:16.358 [2024-05-15 10:09:53.680952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:16.358 [2024-05-15 10:09:53.681126] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:16.358 [2024-05-15 10:09:53.681317] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:16.358 [2024-05-15 10:09:53.681465] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:16.358 [2024-05-15 10:09:53.681603] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.731 [2024-05-15 10:09:54.682271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.731 [2024-05-15 10:09:54.682680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.731 [2024-05-15 10:09:54.682812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1acca00 with addr=10.0.0.2, port=4420 00:29:17.731 [2024-05-15 10:09:54.682929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acca00 is same with the state(5) to be set 00:29:17.731 [2024-05-15 10:09:54.683336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acca00 (9): Bad file descriptor 00:29:17.731 [2024-05-15 10:09:54.683709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.731 [2024-05-15 10:09:54.683849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.731 [2024-05-15 10:09:54.684038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.731 [2024-05-15 10:09:54.687870] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
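Each failed cycle above has the same shape: connect() to 10.0.0.2 port 4420 is refused (errno 111 is ECONNREFUSED, since the listener is gone), flushing the tqpair fails with a bad file descriptor, controller reinitialization fails, and the next reset attempt follows about a second later in this log. A small watch loop for that state, sketched on the assumption that this bdevperf instance also serves RPC on /var/tmp/bdevperf.sock (the socket path is only traced for the second instance later in the log):

  # list the NVMe controllers bdevperf still has attached while the listener is down
  while true; do
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    sleep 1
  done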
00:29:17.731 [2024-05-15 10:09:54.688118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:17.731 10:09:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:17.731 [2024-05-15 10:09:55.007226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:18.665 10:09:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96383
00:29:18.665 [2024-05-15 10:09:55.723795] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:23.931
00:29:23.931 Latency(us)
00:29:23.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:23.931 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:23.931 Verification LBA range: start 0x0 length 0x4000
00:29:23.931 NVMe0n1 : 10.01 5535.41 21.62 3786.95 0.00 13701.44 596.85 3019898.88
00:29:23.931 ===================================================================================================================
00:29:23.931 Total : 5535.41 21.62 3786.95 0.00 13701.44 0.00 3019898.88
00:29:23.931 0
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96213
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 96213 ']'
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 96213
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 96213
00:29:23.931 killing process with pid 96213
00:29:23.931 Received shutdown signal, test time was about 10.000000 seconds
00:29:23.931
00:29:23.931 Latency(us)
00:29:23.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:23.931 ===================================================================================================================
00:29:23.931 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_2
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']'
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 96213'
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 96213
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 96213
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96504
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96504 /var/tmp/bdevperf.sock
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@828 -- # '[' -z 96504 ']'
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local max_retries=100
00:29:23.931 10:10:00
nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:23.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:23.931 10:10:00 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:23.931 [2024-05-15 10:10:01.001729] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:29:23.931 [2024-05-15 10:10:01.002143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96504 ] 00:29:23.931 [2024-05-15 10:10:01.149978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.931 [2024-05-15 10:10:01.313017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.866 10:10:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:24.866 10:10:02 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@861 -- # return 0 00:29:24.866 10:10:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96532 00:29:24.866 10:10:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96504 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:29:24.866 10:10:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:29:25.123 10:10:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:25.381 NVMe0n1 00:29:25.381 10:10:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:25.381 10:10:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96587 00:29:25.381 10:10:02 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:29:25.381 Running I/O for 10 seconds... 
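The trace above is the setup for the second scenario: a fresh bdevperf (pid 96504) is started on core mask 0x4 in wait-for-RPC mode, the nvmf_timeout.bt bpftrace script is attached to it, bdev_nvme options are set, and the controller is attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, which bound how long and how often the host keeps retrying once the path goes away. Collected into one runnable sketch (paths relative to the SPDK repo as in this run; option values are copied verbatim from the trace, and the socket wait stands in for the test's waitforlisten helper):

  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  # wait for the bdevperf RPC socket to appear
  while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done
  # attach the nvmf_timeout.bt bpftrace script to the new process
  scripts/bpftrace.sh "$bdevperf_pid" scripts/bpf/nvmf_timeout.bt &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # start the queued random-read workload; it runs for 10 seconds
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &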
00:29:26.313 10:10:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.574 [2024-05-15 10:10:03.800172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.800478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.800594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.800651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.800781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.800839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.800938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.801051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.801294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.801427] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.801490] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.801594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.801650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.801749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.801808] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.801863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.801963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.802104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.802182] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.802289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.802345] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.802607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.802661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.802715] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.802768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.802876] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.802933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.802988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.803140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.803230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.803294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.803410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.803477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.803539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.803701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.803763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.803824] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.803978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.804119] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.804187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.804348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.574 [2024-05-15 10:10:03.804452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the 
state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.804505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.804559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.804612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.804720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.804779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.804834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.804946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.805000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.805173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.805272] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.805404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.805460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.805515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.805675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.805733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.805786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.805840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.805894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.806017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.806142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.806211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.806344] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.806410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.806476] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.806611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.806681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.806800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.806866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.806972] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.807188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.807407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.807474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.807631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.807705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.807770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.807913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.807974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.808048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.808203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.808268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.808332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.808480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.808549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 
10:10:03.808657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.808727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.808839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.808918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.809022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.809155] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.809263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.809324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.809417] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.809475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a43c00 is same with the state(5) to be set 00:29:26.575 [2024-05-15 10:10:03.809968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.810195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.810364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.810509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.810575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.810675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.810735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.810864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.810959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.811053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.811173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:26.575 [2024-05-15 10:10:03.811411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.811597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.811781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.811965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.812145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.812284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.812383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.812445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.812504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.812599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:30184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.812663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.812799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.813008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.813216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.813372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.575 [2024-05-15 10:10:03.813530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.575 [2024-05-15 10:10:03.813738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.813915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.814081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.814314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.814507] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.814685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.814863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.815036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.815241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.815423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.815577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.815772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.815938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.816150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.816316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.816490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.816645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.816677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.816698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.816719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.816739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.816761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.816780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.816802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.816819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.816837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.816856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.816877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.816894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.816911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.816928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.816947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.816963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.816984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817238] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:45888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:130024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.576 [2024-05-15 10:10:03.817786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.576 [2024-05-15 10:10:03.817810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.817832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.817856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.817876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.817901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.817920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.817943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.817963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.817986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:123248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:26.577 [2024-05-15 10:10:03.818139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:50856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 
10:10:03.818582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:123776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:31440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.818971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.818995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.819014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.819037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.819054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.819072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.819102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.819122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.819137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.819154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.819168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.819186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.819217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.819236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.819251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.819269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.819284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.819301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.819316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.819334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:53 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.819349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.819367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.577 [2024-05-15 10:10:03.819382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.577 [2024-05-15 10:10:03.819400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32688 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:42304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.819964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.819987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:26.578 [2024-05-15 10:10:03.820148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820511] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.578 [2024-05-15 10:10:03.820741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.578 [2024-05-15 10:10:03.820766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.579 [2024-05-15 10:10:03.820786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.579 [2024-05-15 10:10:03.820811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.579 [2024-05-15 10:10:03.820825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.579 [2024-05-15 10:10:03.820843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.579 [2024-05-15 10:10:03.820858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.579 [2024-05-15 10:10:03.820875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66a030 is same with the state(5) to be set 00:29:26.579 [2024-05-15 10:10:03.820900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:26.579 [2024-05-15 10:10:03.820912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:26.579 [2024-05-15 10:10:03.820925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88136 len:8 PRP1 0x0 PRP2 0x0 00:29:26.579 [2024-05-15 10:10:03.820944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.579 [2024-05-15 10:10:03.821061] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x66a030 was disconnected and freed. reset controller. 00:29:26.579 [2024-05-15 10:10:03.821254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.579 [2024-05-15 10:10:03.821281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.579 [2024-05-15 10:10:03.821300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.579 [2024-05-15 10:10:03.821318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.579 [2024-05-15 10:10:03.821334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.579 [2024-05-15 10:10:03.821351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.579 [2024-05-15 10:10:03.821370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:26.579 [2024-05-15 10:10:03.821387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:26.579 [2024-05-15 10:10:03.821401] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f8a00 is same with the state(5) to be set 00:29:26.579 [2024-05-15 10:10:03.821763] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.579 [2024-05-15 10:10:03.821802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f8a00 (9): Bad file descriptor 00:29:26.579 [2024-05-15 10:10:03.821959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.579 [2024-05-15 10:10:03.822031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.579 [2024-05-15 10:10:03.822053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f8a00 with addr=10.0.0.2, port=4420 00:29:26.579 [2024-05-15 10:10:03.822069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f8a00 is same with the state(5) to be set 00:29:26.579 [2024-05-15 10:10:03.822119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x5f8a00 (9): Bad file descriptor 00:29:26.579 [2024-05-15 10:10:03.822145] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.579 [2024-05-15 10:10:03.822160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.579 [2024-05-15 10:10:03.822177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.579 [2024-05-15 10:10:03.822207] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.579 [2024-05-15 10:10:03.822222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.579 10:10:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96587 00:29:28.479 [2024-05-15 10:10:05.822467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.479 [2024-05-15 10:10:05.822583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.479 [2024-05-15 10:10:05.822600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f8a00 with addr=10.0.0.2, port=4420 00:29:28.479 [2024-05-15 10:10:05.822617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f8a00 is same with the state(5) to be set 00:29:28.479 [2024-05-15 10:10:05.822647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f8a00 (9): Bad file descriptor 00:29:28.479 [2024-05-15 10:10:05.822668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.480 [2024-05-15 10:10:05.822679] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.480 [2024-05-15 10:10:05.822703] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.480 [2024-05-15 10:10:05.822735] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.480 [2024-05-15 10:10:05.822747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:30.456 [2024-05-15 10:10:07.822992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-05-15 10:10:07.823131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.456 [2024-05-15 10:10:07.823150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5f8a00 with addr=10.0.0.2, port=4420 00:29:30.456 [2024-05-15 10:10:07.823167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5f8a00 is same with the state(5) to be set 00:29:30.456 [2024-05-15 10:10:07.823197] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f8a00 (9): Bad file descriptor 00:29:30.456 [2024-05-15 10:10:07.823227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:30.456 [2024-05-15 10:10:07.823238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:30.456 [2024-05-15 10:10:07.823251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
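The long run of READ / "ABORTED - SQ DELETION (00/08)" pairs above is expected in this case: when the target side drops the connection, bdev_nvme resets the controller, the submission queue is deleted, and every outstanding read on qid:1 is completed with the ABORTED status before the qpair is disconnected and freed. The connect() failures that follow (errno = 111, i.e. ECONNREFUSED) show the host retrying roughly every two seconds while nothing is accepting on 10.0.0.2:4420, which is what produces the repeated "controller reinitialization failed" / "Resetting controller failed." records that continue below. A minimal triage sketch for summarizing this part of the output, assuming the console log has been saved to a file named build.log (the file name is illustrative, not something the test produces):

grep -c 'ABORTED - SQ DELETION' build.log           # how many outstanding commands were completed as aborted
grep -c 'connect() failed, errno = 111' build.log   # ECONNREFUSED retries while the listener was down
grep -o 'lba:[0-9]*' build.log | sort -u | wc -l    # distinct LBAs among the aborted READs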
00:29:30.456 [2024-05-15 10:10:07.823284] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:30.456 [2024-05-15 10:10:07.823297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:32.988 [2024-05-15 10:10:09.823419] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.555
00:29:33.555 Latency(us)
00:29:33.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:33.555 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:29:33.555 NVMe0n1 : 8.16 2492.61 9.74 15.69 0.00 50953.75 3838.54 7030452.42
00:29:33.555 ===================================================================================================================
00:29:33.555 Total : 2492.61 9.74 15.69 0.00 50953.75 3838.54 7030452.42
00:29:33.555 0
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:33.556 Attaching 5 probes...
00:29:33.556 1136.965360: reset bdev controller NVMe0
00:29:33.556 1137.070709: reconnect bdev controller NVMe0
00:29:33.556 3137.506692: reconnect delay bdev controller NVMe0
00:29:33.556 3137.537175: reconnect bdev controller NVMe0
00:29:33.556 5138.031424: reconnect delay bdev controller NVMe0
00:29:33.556 5138.061201: reconnect bdev controller NVMe0
00:29:33.556 7138.582464: reconnect delay bdev controller NVMe0
00:29:33.556 7138.613465: reconnect bdev controller NVMe0
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96532
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96504
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 96504 ']'
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 96504
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 96504
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_2
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']'
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 96504'
00:29:33.556 killing process with pid 96504
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 96504
00:29:33.556 Received shutdown signal, test time was about 8.231233 seconds
00:29:33.556
00:29:33.556 Latency(us)
00:29:33.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:33.556 ===================================================================================================================
00:29:33.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:33.556 10:10:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 96504
00:29:34.123 10:10:11 nvmf_tcp.nvmf_timeout
-- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:34.381 rmmod nvme_tcp 00:29:34.381 rmmod nvme_fabrics 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 95910 ']' 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 95910 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@947 -- # '[' -z 95910 ']' 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # kill -0 95910 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # uname 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 95910 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # echo 'killing process with pid 95910' 00:29:34.381 killing process with pid 95910 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # kill 95910 00:29:34.381 [2024-05-15 10:10:11.686162] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:34.381 10:10:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@971 -- # wait 95910 00:29:34.948 10:10:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:34.948 10:10:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:34.948 10:10:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:34.948 10:10:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:34.948 10:10:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:34.948 10:10:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.948 10:10:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:34.948 10:10:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.948 10:10:12 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:34.948 00:29:34.948 real 0m49.030s 00:29:34.948 
user 2m23.117s 00:29:34.948 sys 0m6.778s 00:29:34.948 10:10:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:34.948 10:10:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:34.948 ************************************ 00:29:34.948 END TEST nvmf_timeout 00:29:34.948 ************************************ 00:29:34.948 10:10:12 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:29:34.948 10:10:12 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:29:34.948 10:10:12 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:34.948 10:10:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:34.948 10:10:12 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:34.948 00:29:34.948 real 16m23.721s 00:29:34.948 user 43m19.853s 00:29:34.948 sys 4m16.199s 00:29:34.948 10:10:12 nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:34.948 10:10:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:34.948 ************************************ 00:29:34.948 END TEST nvmf_tcp 00:29:34.948 ************************************ 00:29:34.948 10:10:12 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:29:34.948 10:10:12 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:34.948 10:10:12 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:34.948 10:10:12 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:34.948 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:29:34.948 ************************************ 00:29:34.948 START TEST spdkcli_nvmf_tcp 00:29:34.948 ************************************ 00:29:34.948 10:10:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:35.207 * Looking for test storage... 
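The trace.txt dump and the grep above are the actual pass criterion for the timeout case: each delayed reconnect is recorded as a "reconnect delay bdev controller NVMe0" probe (here at roughly 3.1 s, 5.1 s and 7.1 s), grep -c returns 3, and the guard (( 3 <= 2 )) evaluates false, so the failure path is skipped before the test processes are killed and the trace file is removed. A sketch of that style of check, with the pattern and threshold taken from the log and the variable name purely illustrative:

delay_count=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
if (( delay_count <= 2 )); then
    echo "expected more than two delayed reconnects, got $delay_count" >&2
    exit 1
fi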
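After that check the suite tears itself down: killprocess stops the SPDK application, nvmftestfini unloads the kernel initiator modules (the rmmod nvme_tcp / rmmod nvme_fabrics lines above), and the virt-network cleanup removes the test namespace and flushes the host-side interface. A condensed sketch of those steps, assuming root privileges; nvmf_init_if is the interface name used by this virt setup, and $app_pid stands in for the pid being stopped (95910 in this run):

kill "$app_pid" && wait "$app_pid"   # killprocess: terminate and reap the SPDK process
modprobe -v -r nvme-tcp              # unload the kernel NVMe/TCP initiator
modprobe -v -r nvme-fabrics          # and the fabrics core it depends on
ip -4 addr flush nvmf_init_if        # drop the IPv4 test addresses from the host-side interface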
00:29:35.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96803 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96803 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # '[' -z 96803 ']' 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:35.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:35.207 10:10:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:35.207 [2024-05-15 10:10:12.503967] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:29:35.207 [2024-05-15 10:10:12.504062] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96803 ] 00:29:35.464 [2024-05-15 10:10:12.654536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:35.464 [2024-05-15 10:10:12.825195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.464 [2024-05-15 10:10:12.825200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.397 10:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:36.398 10:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@861 -- # return 0 00:29:36.398 10:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:36.398 10:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:36.398 10:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.398 10:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:36.398 10:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:36.398 10:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:36.398 10:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:36.398 10:10:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:36.398 10:10:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:36.398 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:36.398 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:36.398 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:36.398 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:36.398 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:36.398 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:36.398 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:36.398 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 
00:29:36.398 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:36.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:36.398 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:36.398 ' 00:29:39.680 [2024-05-15 10:10:16.391538] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.614 [2024-05-15 10:10:17.672392] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:40.614 [2024-05-15 10:10:17.672806] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:43.159 [2024-05-15 10:10:20.030412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:45.063 [2024-05-15 10:10:22.083995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:46.488 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:46.488 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:46.488 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:46.488 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:46.488 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:46.488 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:46.488 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:46.488 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:46.488 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:46.488 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:46.488 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:46.488 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:46.488 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:46.488 10:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:46.488 10:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:46.488 10:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:46.488 10:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:46.488 10:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:46.488 10:10:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:46.488 10:10:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:46.488 10:10:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:29:47.056 10:10:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match 
/home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:47.056 10:10:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:47.056 10:10:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:47.056 10:10:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:47.056 10:10:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.056 10:10:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:47.056 10:10:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:47.056 10:10:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:47.056 10:10:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:47.056 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:47.056 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:47.056 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:47.056 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:47.056 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:47.056 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:47.056 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:47.056 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:47.056 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:47.056 ' 00:29:52.381 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:52.381 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:52.381 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:52.381 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:52.381 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:52.381 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:52.381 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:52.381 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:52.381 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:52.381 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:52.381 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:52.381 Executing command: ['/bdevs/malloc delete 
Malloc3', 'Malloc3', False] 00:29:52.381 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:52.381 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:52.638 10:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:52.638 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:52.638 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.638 10:10:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96803 00:29:52.638 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 96803 ']' 00:29:52.638 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 96803 00:29:52.638 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # uname 00:29:52.638 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:52.638 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 96803 00:29:52.639 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:52.639 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:52.639 killing process with pid 96803 00:29:52.639 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 96803' 00:29:52.639 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # kill 96803 00:29:52.639 [2024-05-15 10:10:29.863321] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:52.639 10:10:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # wait 96803 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96803 ']' 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96803 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 96803 ']' 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 96803 00:29:52.896 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (96803) - No such process 00:29:52.896 Process with pid 96803 is not found 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # echo 'Process with pid 96803 is not found' 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:52.896 00:29:52.896 real 0m17.919s 00:29:52.896 user 0m38.493s 00:29:52.896 sys 0m1.217s 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:52.896 10:10:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.896 ************************************ 00:29:52.896 END TEST spdkcli_nvmf_tcp 00:29:52.896 ************************************ 00:29:52.896 10:10:30 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh 
--transport=tcp 00:29:52.896 10:10:30 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:52.896 10:10:30 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:52.896 10:10:30 -- common/autotest_common.sh@10 -- # set +x 00:29:53.154 ************************************ 00:29:53.154 START TEST nvmf_identify_passthru 00:29:53.154 ************************************ 00:29:53.154 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:53.154 * Looking for test storage... 00:29:53.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:53.154 10:10:30 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:53.154 10:10:30 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.154 10:10:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.154 10:10:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.154 10:10:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.154 10:10:30 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.154 10:10:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.154 10:10:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:53.154 10:10:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:53.154 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:53.155 10:10:30 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:53.155 10:10:30 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:53.155 10:10:30 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:53.155 10:10:30 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:53.155 10:10:30 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.155 10:10:30 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.155 10:10:30 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.155 10:10:30 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:53.155 10:10:30 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:53.155 10:10:30 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:53.155 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:53.155 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:53.155 Cannot find device "nvmf_tgt_br" 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:53.155 Cannot find device "nvmf_tgt_br2" 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:53.155 Cannot find device "nvmf_tgt_br" 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:53.155 Cannot find device "nvmf_tgt_br2" 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:53.155 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:53.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:53.413 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:53.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:53.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:29:53.413 00:29:53.413 --- 10.0.0.2 ping statistics --- 00:29:53.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.413 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:53.413 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:53.413 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:29:53.413 00:29:53.413 --- 10.0.0.3 ping statistics --- 00:29:53.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.413 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:53.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:53.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:29:53.413 00:29:53.413 --- 10.0.0.1 ping statistics --- 00:29:53.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.413 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:53.413 10:10:30 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:53.670 10:10:30 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.670 10:10:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=() 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # local bdfs 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=($(get_nvme_bdfs)) 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # get_nvme_bdfs 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=() 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # local bdfs 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:53.670 10:10:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # echo 0000:00:10.0 00:29:53.670 10:10:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:29:53.670 10:10:30 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:29:53.670 10:10:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:53.670 10:10:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:53.670 10:10:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:53.926 10:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
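The serial-number lookup traced above reduces to two pieces: gen_nvme.sh enumerates local NVMe controllers as bdev_nvme_attach_controller config entries, and spdk_nvme_identify dumps the controller data over PCIe. A condensed sketch of the same steps in plain bash, assuming the checkout path used in this run and at least one local NVMe device (a paraphrase of the helper functions, not the literal test code):

    # Pick the first NVMe BDF reported by gen_nvme.sh and read its serial number over PCIe.
    rootdir=/home/vagrant/spdk_repo/spdk                      # checkout path used in this run

    # gen_nvme.sh emits bdev_nvme_attach_controller config; jq pulls the PCI addresses
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdf=${bdfs[0]}                                            # 0000:00:10.0 in this run

    # spdk_nvme_identify prints the controller data structure; grep/awk extract one field
    serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
              | grep 'Serial Number:' | awk '{print $3}')
    echo "first NVMe bdf: $bdf, serial: $serial"              # the QEMU disk here reports 12340

The value captured here (12340) is what the test later compares against the serial number the same controller reports once it is exported over NVMe/TCP.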
00:29:53.926 10:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:53.926 10:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:53.926 10:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:53.926 10:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:29:53.926 10:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:53.926 10:10:31 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:53.926 10:10:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:53.926 10:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:53.926 10:10:31 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:53.926 10:10:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:54.182 10:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97305 00:29:54.182 10:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:54.182 10:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:54.182 10:10:31 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97305 00:29:54.182 10:10:31 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # '[' -z 97305 ']' 00:29:54.182 10:10:31 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.182 10:10:31 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:54.182 10:10:31 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.182 10:10:31 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:54.182 10:10:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:54.182 [2024-05-15 10:10:31.378958] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:29:54.182 [2024-05-15 10:10:31.380031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.182 [2024-05-15 10:10:31.531715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:54.439 [2024-05-15 10:10:31.702797] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.439 [2024-05-15 10:10:31.703107] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.439 [2024-05-15 10:10:31.703242] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.439 [2024-05-15 10:10:31.703300] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:54.439 [2024-05-15 10:10:31.703332] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.439 [2024-05-15 10:10:31.703460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.439 [2024-05-15 10:10:31.704379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.439 [2024-05-15 10:10:31.704469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.439 [2024-05-15 10:10:31.704469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:55.004 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:55.004 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@861 -- # return 0 00:29:55.004 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:55.004 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.004 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:55.261 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.261 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:55.261 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.261 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:55.261 [2024-05-15 10:10:32.530594] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:55.261 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.261 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:55.261 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.261 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:55.261 [2024-05-15 10:10:32.545122] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.261 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.261 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:55.261 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:55.261 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:55.261 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:29:55.261 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.261 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:55.518 Nvme0n1 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.518 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.518 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.518 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:55.518 [2024-05-15 10:10:32.688415] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:55.518 [2024-05-15 10:10:32.689238] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.518 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:55.518 [ 00:29:55.518 { 00:29:55.518 "allow_any_host": true, 00:29:55.518 "hosts": [], 00:29:55.518 "listen_addresses": [], 00:29:55.518 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:55.518 "subtype": "Discovery" 00:29:55.518 }, 00:29:55.518 { 00:29:55.518 "allow_any_host": true, 00:29:55.518 "hosts": [], 00:29:55.518 "listen_addresses": [ 00:29:55.518 { 00:29:55.518 "adrfam": "IPv4", 00:29:55.518 "traddr": "10.0.0.2", 00:29:55.518 "trsvcid": "4420", 00:29:55.518 "trtype": "TCP" 00:29:55.518 } 00:29:55.518 ], 00:29:55.518 "max_cntlid": 65519, 00:29:55.518 "max_namespaces": 1, 00:29:55.518 "min_cntlid": 1, 00:29:55.518 "model_number": "SPDK bdev Controller", 00:29:55.518 "namespaces": [ 00:29:55.518 { 00:29:55.518 "bdev_name": "Nvme0n1", 00:29:55.518 "name": "Nvme0n1", 00:29:55.518 "nguid": "11BF8ED777AC4221B4F543B655E408BC", 00:29:55.518 "nsid": 1, 00:29:55.518 "uuid": "11bf8ed7-77ac-4221-b4f5-43b655e408bc" 00:29:55.518 } 00:29:55.518 ], 00:29:55.518 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:55.518 "serial_number": "SPDK00000000000001", 00:29:55.518 "subtype": "NVMe" 00:29:55.518 } 00:29:55.518 ] 00:29:55.518 10:10:32 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:55.518 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:55.518 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:55.518 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:55.775 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:29:55.775 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:55.775 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 
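Stripped of the rpc_cmd/waitforlisten plumbing, the passthru target built above comes down to one app launch plus a short RPC sequence; rpc_cmd in the trace forwards to scripts/rpk is not assumed here, it forwards to scripts/rpc.py. A sketch under those assumptions, with addresses, NQN, and core mask copied from this run and error handling omitted:

    rootdir=/home/vagrant/spdk_repo/spdk        # checkout path used in this run
    rpc="$rootdir/scripts/rpc.py"               # rpc_cmd in the trace forwards to this script

    # Target: start nvmf_tgt paused inside the test namespace; --wait-for-rpc holds off
    # subsystem init so the passthru identify handler can be enabled first.
    ip netns exec nvmf_tgt_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done     # stand-in for waitforlisten

    $rpc nvmf_set_config --passthru-identify-ctrlr
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator: the values reported over TCP must match what PCIe reported earlier.
    "$rootdir/build/bin/spdk_nvme_identify" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        | grep -E 'Serial Number:|Model Number:'

Starting nvmf_tgt with --wait-for-rpc is what makes the nvmf_set_config --passthru-identify-ctrlr call possible before framework_start_init; once the listener is up, the serial and model read over TCP (12340 / QEMU) must equal the PCIe values, which is exactly what the '[' 12340 '!=' 12340 ']' and '[' QEMU '!=' QEMU ']' checks just below verify.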
00:29:55.775 10:10:32 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:56.032 10:10:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:29:56.032 10:10:33 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:29:56.032 10:10:33 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:29:56.032 10:10:33 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:56.032 10:10:33 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:56.032 10:10:33 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:56.032 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:56.032 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:56.032 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:56.032 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:56.032 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:56.032 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:56.032 rmmod nvme_tcp 00:29:56.032 rmmod nvme_fabrics 00:29:56.032 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:56.032 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:56.032 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:56.032 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97305 ']' 00:29:56.032 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97305 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' -z 97305 ']' 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # kill -0 97305 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # uname 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 97305 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # echo 'killing process with pid 97305' 00:29:56.032 killing process with pid 97305 00:29:56.032 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # kill 97305 00:29:56.032 [2024-05-15 10:10:33.389772] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # wait 97305 00:29:56.032 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:56.600 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:56.600 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:29:56.600 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:56.600 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:56.600 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:56.600 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.600 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:56.600 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.600 10:10:33 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:56.600 00:29:56.600 real 0m3.567s 00:29:56.600 user 0m8.328s 00:29:56.600 sys 0m0.997s 00:29:56.600 ************************************ 00:29:56.600 END TEST nvmf_identify_passthru 00:29:56.600 ************************************ 00:29:56.600 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:56.600 10:10:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:56.600 10:10:33 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:56.600 10:10:33 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:56.600 10:10:33 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:56.600 10:10:33 -- common/autotest_common.sh@10 -- # set +x 00:29:56.600 ************************************ 00:29:56.600 START TEST nvmf_dif 00:29:56.600 ************************************ 00:29:56.600 10:10:33 nvmf_dif -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:56.858 * Looking for test storage... 
00:29:56.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:56.858 10:10:34 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:56.858 10:10:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:56.858 10:10:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:56.859 10:10:34 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.859 10:10:34 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.859 10:10:34 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.859 10:10:34 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.859 10:10:34 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.859 10:10:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.859 10:10:34 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:56.859 10:10:34 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:56.859 10:10:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:56.859 10:10:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:56.859 10:10:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:56.859 10:10:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:56.859 10:10:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.859 10:10:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:56.859 10:10:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:56.859 10:10:34 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:56.859 Cannot find device "nvmf_tgt_br" 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@155 -- # true 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:56.859 Cannot find device "nvmf_tgt_br2" 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@156 -- # true 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:56.859 Cannot find device "nvmf_tgt_br" 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@158 -- # true 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:56.859 Cannot find device "nvmf_tgt_br2" 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@159 -- # true 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:56.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:56.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:56.859 10:10:34 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:57.118 
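For readers following the trace: nvmf_veth_init first tries to tear down any leftover topology, which is why the "Cannot find device" and "Cannot open network namespace" messages above are each followed by a traced "true" - those failures are expected on a clean host. It then builds the test network: one initiator-side veth pair kept on the host and two target-side veth pairs whose inner ends are moved into the nvmf_tgt_ns_spdk namespace, addressed from 10.0.0.0/24. A standalone sketch of that part, with interface names and addresses taken from the trace and root plus iproute2 assumed, would be:

# Sketch of the namespace/veth layout built by nvmf_veth_init (assumptions: run as root, iproute2 present).
ip netns add nvmf_tgt_ns_spdk                                   # target app will run in its own net namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side pair stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target-side pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target-side pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

The *_br peer ends are then enslaved to an nvmf_br bridge, an iptables rule accepts TCP port 4420 on nvmf_init_if, and the three pings in the following entries verify 10.0.0.2 and 10.0.0.3 from the host and 10.0.0.1 from inside the namespace.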
10:10:34 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:57.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:29:57.118 00:29:57.118 --- 10.0.0.2 ping statistics --- 00:29:57.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.118 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:57.118 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:57.118 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:29:57.118 00:29:57.118 --- 10.0.0.3 ping statistics --- 00:29:57.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.118 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:57.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:57.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:29:57.118 00:29:57.118 --- 10.0.0.1 ping statistics --- 00:29:57.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.118 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:57.118 10:10:34 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:57.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:57.686 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:57.686 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:57.686 10:10:34 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.686 10:10:34 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:57.686 10:10:34 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:57.686 10:10:34 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.686 10:10:34 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:57.686 10:10:34 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:57.686 10:10:34 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:57.686 10:10:34 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:57.686 10:10:34 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:57.686 10:10:34 nvmf_dif -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:57.686 10:10:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:57.686 10:10:34 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97651 00:29:57.686 
10:10:34 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:57.686 10:10:34 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97651 00:29:57.686 10:10:34 nvmf_dif -- common/autotest_common.sh@828 -- # '[' -z 97651 ']' 00:29:57.686 10:10:34 nvmf_dif -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.686 10:10:34 nvmf_dif -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:57.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.686 10:10:34 nvmf_dif -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.686 10:10:34 nvmf_dif -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:57.686 10:10:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:57.686 [2024-05-15 10:10:34.940593] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:29:57.686 [2024-05-15 10:10:34.940711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.944 [2024-05-15 10:10:35.077543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.944 [2024-05-15 10:10:35.248449] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.944 [2024-05-15 10:10:35.248518] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.944 [2024-05-15 10:10:35.248533] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.944 [2024-05-15 10:10:35.248547] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.944 [2024-05-15 10:10:35.248559] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
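nvmfappstart then launches the target application inside that namespace (the "ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF" entry above) and waitforlisten blocks until the RPC socket answers; the EAL and app_setup_trace notices are the application coming up. A rough standalone equivalent, using the repo path from the trace and an illustrative polling loop in place of the harness's waitforlisten() helper, might be:

#!/usr/bin/env bash
# Minimal sketch: start nvmf_tgt in the test namespace, keep its pid, and poll the RPC socket.
SPDK=/home/vagrant/spdk_repo/spdk            # repo path as used in the trace

ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!

for _ in $(seq 1 100); do
    # spdk_get_version only succeeds once the app is up and serving RPCs on /var/tmp/spdk.sock
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version > /dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
        break
    fi
    sleep 0.2
done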
00:29:57.944 [2024-05-15 10:10:35.248609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.880 10:10:36 nvmf_dif -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:58.880 10:10:36 nvmf_dif -- common/autotest_common.sh@861 -- # return 0 00:29:58.880 10:10:36 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:58.880 10:10:36 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:58.880 10:10:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:58.880 10:10:36 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.880 10:10:36 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:58.880 10:10:36 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:58.880 10:10:36 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.880 10:10:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:58.880 [2024-05-15 10:10:36.126246] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.880 10:10:36 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:58.880 10:10:36 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:58.880 10:10:36 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:58.880 10:10:36 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:58.880 10:10:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:58.880 ************************************ 00:29:58.880 START TEST fio_dif_1_default 00:29:58.880 ************************************ 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # fio_dif_1 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:58.880 bdev_null0 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:58.880 [2024-05-15 10:10:36.170171] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:58.880 [2024-05-15 10:10:36.170456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local sanitizers 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # shift 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local asan_lib= 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.880 { 00:29:58.880 "params": { 00:29:58.880 "name": "Nvme$subsystem", 00:29:58.880 "trtype": "$TEST_TRANSPORT", 00:29:58.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.880 "adrfam": "ipv4", 00:29:58.880 "trsvcid": "$NVMF_PORT", 00:29:58.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.880 "hdgst": ${hdgst:-false}, 00:29:58.880 "ddgst": ${ddgst:-false} 00:29:58.880 }, 00:29:58.880 "method": "bdev_nvme_attach_controller" 00:29:58.880 } 00:29:58.880 EOF 00:29:58.880 )") 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:29:58.880 10:10:36 
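The subsystem that fio_dif_1_default is about to exercise was configured a few entries back entirely through rpc_cmd: a TCP transport with --dif-insert-or-strip, a 64 MiB null bdev with 512-byte blocks, 16-byte metadata and DIF type 1, and a subsystem exposing it on 10.0.0.2:4420. Written out as plain rpc.py calls (socket path, names and ports taken from the trace), that sequence is roughly:

# Sketch of the rpc_cmd sequence for create_subsystems 0 in this test.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip          # target inserts/strips DIF on the wire
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MiB null bdev, 512B blocks + 16B metadata
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420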
nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libasan 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:58.880 10:10:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:58.881 10:10:36 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:58.881 "params": { 00:29:58.881 "name": "Nvme0", 00:29:58.881 "trtype": "tcp", 00:29:58.881 "traddr": "10.0.0.2", 00:29:58.881 "adrfam": "ipv4", 00:29:58.881 "trsvcid": "4420", 00:29:58.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:58.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:58.881 "hdgst": false, 00:29:58.881 "ddgst": false 00:29:58.881 }, 00:29:58.881 "method": "bdev_nvme_attach_controller" 00:29:58.881 }' 00:29:58.881 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:58.881 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:58.881 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:29:58.881 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:58.881 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:29:58.881 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:29:58.881 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib= 00:29:58.881 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:29:58.881 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:58.881 10:10:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:59.139 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:59.139 fio-3.35 00:29:59.139 Starting 1 thread 00:30:11.341 00:30:11.341 filename0: (groupid=0, jobs=1): err= 0: pid=97741: Wed May 15 10:10:47 2024 00:30:11.342 read: IOPS=1416, BW=5667KiB/s (5803kB/s)(55.3MiB/10001msec) 00:30:11.342 slat (nsec): min=6157, max=89822, avg=8327.74, stdev=3906.63 00:30:11.342 clat (usec): min=359, max=43901, avg=2799.75, stdev=9393.36 00:30:11.342 lat (usec): min=366, max=43945, avg=2808.08, stdev=9393.65 00:30:11.342 clat percentiles (usec): 00:30:11.342 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 424], 20.00th=[ 445], 00:30:11.342 | 30.00th=[ 469], 40.00th=[ 490], 50.00th=[ 502], 60.00th=[ 515], 00:30:11.342 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[40633], 00:30:11.342 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:30:11.342 | 99.99th=[43779] 00:30:11.342 bw ( KiB/s): min= 2112, max=11616, per=99.42%, avg=5634.47, stdev=2309.45, samples=19 00:30:11.342 iops : min= 
528, max= 2904, avg=1408.58, stdev=577.36, samples=19 00:30:11.342 lat (usec) : 500=48.82%, 750=45.31%, 1000=0.12% 00:30:11.342 lat (msec) : 2=0.08%, 50=5.67% 00:30:11.342 cpu : usr=83.94%, sys=15.03%, ctx=36, majf=0, minf=9 00:30:11.342 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:11.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.342 issued rwts: total=14168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.342 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:11.342 00:30:11.342 Run status group 0 (all jobs): 00:30:11.342 READ: bw=5667KiB/s (5803kB/s), 5667KiB/s-5667KiB/s (5803kB/s-5803kB/s), io=55.3MiB (58.0MB), run=10001-10001msec 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.342 00:30:11.342 real 0m11.220s 00:30:11.342 user 0m9.196s 00:30:11.342 sys 0m1.862s 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:11.342 ************************************ 00:30:11.342 END TEST fio_dif_1_default 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 ************************************ 00:30:11.342 10:10:47 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:11.342 10:10:47 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:11.342 10:10:47 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:11.342 10:10:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 ************************************ 00:30:11.342 START TEST fio_dif_1_multi_subsystems 00:30:11.342 ************************************ 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # fio_dif_1_multi_subsystems 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub 
in "$@" 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 bdev_null0 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 [2024-05-15 10:10:47.457070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 bdev_null1 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 10:10:47 
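Both subsystems are then handed to fio through a generated JSON config rather than a job-file option: the heredoc fragments assembled in the entries below produce one bdev_nvme_attach_controller entry per subsystem, which the spdk_bdev plugin loads via --spdk_json_conf. Written out as a standalone file it would look roughly like the sketch below; only the params blocks appear verbatim in the trace, so the outer "subsystems"/"bdev" wrapper is an assumption here.

# Sketch (assumed wrapper) of the JSON config the fio spdk_bdev plugin is fed for the two subsystems.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                      "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false } }
      ]
    }
  ]
}
EOF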
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local sanitizers 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # shift 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:11.342 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:11.342 { 00:30:11.342 "params": { 00:30:11.342 "name": "Nvme$subsystem", 00:30:11.342 "trtype": "$TEST_TRANSPORT", 00:30:11.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.342 "adrfam": "ipv4", 00:30:11.342 "trsvcid": 
"$NVMF_PORT", 00:30:11.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.343 "hdgst": ${hdgst:-false}, 00:30:11.343 "ddgst": ${ddgst:-false} 00:30:11.343 }, 00:30:11.343 "method": "bdev_nvme_attach_controller" 00:30:11.343 } 00:30:11.343 EOF 00:30:11.343 )") 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libasan 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:11.343 { 00:30:11.343 "params": { 00:30:11.343 "name": "Nvme$subsystem", 00:30:11.343 "trtype": "$TEST_TRANSPORT", 00:30:11.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.343 "adrfam": "ipv4", 00:30:11.343 "trsvcid": "$NVMF_PORT", 00:30:11.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.343 "hdgst": ${hdgst:-false}, 00:30:11.343 "ddgst": ${ddgst:-false} 00:30:11.343 }, 00:30:11.343 "method": "bdev_nvme_attach_controller" 00:30:11.343 } 00:30:11.343 EOF 00:30:11.343 )") 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:11.343 "params": { 00:30:11.343 "name": "Nvme0", 00:30:11.343 "trtype": "tcp", 00:30:11.343 "traddr": "10.0.0.2", 00:30:11.343 "adrfam": "ipv4", 00:30:11.343 "trsvcid": "4420", 00:30:11.343 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:11.343 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:11.343 "hdgst": false, 00:30:11.343 "ddgst": false 00:30:11.343 }, 00:30:11.343 "method": "bdev_nvme_attach_controller" 00:30:11.343 },{ 00:30:11.343 "params": { 00:30:11.343 "name": "Nvme1", 00:30:11.343 "trtype": "tcp", 00:30:11.343 "traddr": "10.0.0.2", 00:30:11.343 "adrfam": "ipv4", 00:30:11.343 "trsvcid": "4420", 00:30:11.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:11.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:11.343 "hdgst": false, 00:30:11.343 "ddgst": false 00:30:11.343 }, 00:30:11.343 "method": "bdev_nvme_attach_controller" 00:30:11.343 }' 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:11.343 10:10:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.343 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:11.343 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:11.343 fio-3.35 00:30:11.343 Starting 2 threads 00:30:21.376 00:30:21.376 filename0: (groupid=0, jobs=1): err= 0: pid=97900: Wed May 15 10:10:58 2024 00:30:21.376 read: IOPS=332, BW=1330KiB/s (1362kB/s)(13.0MiB/10007msec) 00:30:21.376 slat (nsec): min=6126, max=66236, avg=10793.14, stdev=7279.84 00:30:21.376 clat (usec): min=371, max=42823, avg=11993.29, stdev=18176.43 00:30:21.376 lat (usec): min=378, max=42879, avg=12004.08, stdev=18176.80 00:30:21.376 clat percentiles (usec): 00:30:21.376 | 1.00th=[ 400], 5.00th=[ 437], 10.00th=[ 465], 20.00th=[ 506], 00:30:21.376 | 30.00th=[ 537], 40.00th=[ 562], 50.00th=[ 611], 60.00th=[ 865], 00:30:21.376 | 70.00th=[ 1287], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:30:21.376 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:21.376 | 99.99th=[42730] 00:30:21.376 bw ( KiB/s): min= 608, max= 5824, per=46.43%, avg=1329.60, stdev=1126.95, samples=20 00:30:21.376 iops : 
min= 152, max= 1456, avg=332.40, stdev=281.74, samples=20 00:30:21.376 lat (usec) : 500=18.45%, 750=38.88%, 1000=5.89% 00:30:21.376 lat (msec) : 2=8.17%, 4=0.60%, 50=28.00% 00:30:21.376 cpu : usr=91.67%, sys=7.68%, ctx=111, majf=0, minf=9 00:30:21.376 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.376 issued rwts: total=3328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.376 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:21.377 filename1: (groupid=0, jobs=1): err= 0: pid=97901: Wed May 15 10:10:58 2024 00:30:21.377 read: IOPS=383, BW=1533KiB/s (1570kB/s)(15.0MiB/10017msec) 00:30:21.377 slat (nsec): min=4065, max=70160, avg=9678.28, stdev=6343.59 00:30:21.377 clat (usec): min=361, max=42475, avg=10403.39, stdev=17324.50 00:30:21.377 lat (usec): min=368, max=42483, avg=10413.07, stdev=17324.80 00:30:21.377 clat percentiles (usec): 00:30:21.377 | 1.00th=[ 408], 5.00th=[ 441], 10.00th=[ 465], 20.00th=[ 498], 00:30:21.377 | 30.00th=[ 515], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 685], 00:30:21.377 | 70.00th=[ 1156], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:30:21.377 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42730], 00:30:21.377 | 99.99th=[42730] 00:30:21.377 bw ( KiB/s): min= 704, max= 5888, per=53.59%, avg=1534.40, stdev=1208.46, samples=20 00:30:21.377 iops : min= 176, max= 1472, avg=383.60, stdev=302.12, samples=20 00:30:21.377 lat (usec) : 500=21.90%, 750=40.60%, 1000=5.76% 00:30:21.377 lat (msec) : 2=7.37%, 4=0.21%, 50=24.17% 00:30:21.377 cpu : usr=92.18%, sys=7.22%, ctx=68, majf=0, minf=0 00:30:21.377 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:21.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.377 issued rwts: total=3840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.377 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:21.377 00:30:21.377 Run status group 0 (all jobs): 00:30:21.377 READ: bw=2862KiB/s (2931kB/s), 1330KiB/s-1533KiB/s (1362kB/s-1570kB/s), io=28.0MiB (29.4MB), run=10007-10017msec 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 
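Teardown is symmetric with setup: destroy_subsystems deletes each NVMe-oF subsystem first and then the null bdev behind it, as the rpc_cmd entries here and in the next entries show for subsystems 0 and 1. As standalone rpc.py calls (socket path assumed as before):

# Sketch of destroy_subsystems 0 1: subsystem removed before its backing bdev is deleted.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

for i in 0 1; do
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    $RPC bdev_null_delete bdev_null$i
done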
00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.635 00:30:21.635 real 0m11.453s 00:30:21.635 user 0m19.381s 00:30:21.635 sys 0m1.909s 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:21.635 ************************************ 00:30:21.635 END TEST fio_dif_1_multi_subsystems 00:30:21.635 ************************************ 00:30:21.635 10:10:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:21.635 10:10:58 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:21.635 10:10:58 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:21.635 10:10:58 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:21.635 10:10:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:21.635 ************************************ 00:30:21.635 START TEST fio_dif_rand_params 00:30:21.635 ************************************ 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # fio_dif_rand_params 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.635 bdev_null0 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.635 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:21.636 [2024-05-15 10:10:58.974210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:21.636 { 00:30:21.636 "params": { 00:30:21.636 "name": "Nvme$subsystem", 00:30:21.636 "trtype": "$TEST_TRANSPORT", 00:30:21.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:21.636 "adrfam": "ipv4", 00:30:21.636 "trsvcid": "$NVMF_PORT", 00:30:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:21.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:21.636 "hdgst": 
${hdgst:-false}, 00:30:21.636 "ddgst": ${ddgst:-false} 00:30:21.636 }, 00:30:21.636 "method": "bdev_nvme_attach_controller" 00:30:21.636 } 00:30:21.636 EOF 00:30:21.636 )") 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
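fio_dif_rand_params reuses the same JSON-config plumbing but changes the job shape: a DIF type 3 null bdev, 128 KiB blocks, three jobs at queue depth 3 for five seconds, all visible in the fio banner and results further down. The generated gen_fio_conf job file itself is not echoed in the log, so the sketch below is only a hand-written job with the same shape: thread/direct/time_based are assumptions, and the filename Nvme0n1 follows SPDK's usual <controller>n<nsid> bdev naming rather than anything shown in the trace.

# Sketch of the rand_params job shape; /tmp/bdev.json would hold a single Nvme0 controller
# entry like the config sketched earlier.
cat > /tmp/dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
direct=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio /tmp/dif_rand.fio --spdk_json_conf /tmp/bdev.json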
00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:21.636 10:10:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:21.636 "params": { 00:30:21.636 "name": "Nvme0", 00:30:21.636 "trtype": "tcp", 00:30:21.636 "traddr": "10.0.0.2", 00:30:21.636 "adrfam": "ipv4", 00:30:21.636 "trsvcid": "4420", 00:30:21.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.636 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:21.636 "hdgst": false, 00:30:21.636 "ddgst": false 00:30:21.636 }, 00:30:21.636 "method": "bdev_nvme_attach_controller" 00:30:21.636 }' 00:30:21.893 10:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:21.893 10:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:21.893 10:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:21.893 10:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:21.893 10:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:30:21.893 10:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:21.893 10:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:21.893 10:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:21.893 10:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:21.893 10:10:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:21.893 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:21.893 ... 
00:30:21.893 fio-3.35 00:30:21.893 Starting 3 threads 00:30:28.450 00:30:28.450 filename0: (groupid=0, jobs=1): err= 0: pid=98056: Wed May 15 10:11:04 2024 00:30:28.450 read: IOPS=238, BW=29.8MiB/s (31.2MB/s)(149MiB/5008msec) 00:30:28.450 slat (nsec): min=5625, max=63477, avg=16772.69, stdev=8412.23 00:30:28.450 clat (usec): min=5807, max=56305, avg=12561.39, stdev=8072.91 00:30:28.450 lat (usec): min=5819, max=56319, avg=12578.16, stdev=8073.47 00:30:28.450 clat percentiles (usec): 00:30:28.450 | 1.00th=[ 6915], 5.00th=[ 8094], 10.00th=[ 9110], 20.00th=[10028], 00:30:28.450 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[11207], 00:30:28.450 | 70.00th=[11469], 80.00th=[11863], 90.00th=[14353], 95.00th=[16450], 00:30:28.450 | 99.00th=[52691], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:30:28.450 | 99.99th=[56361] 00:30:28.450 bw ( KiB/s): min=19200, max=36864, per=34.61%, avg=30489.60, stdev=5882.99, samples=10 00:30:28.450 iops : min= 150, max= 288, avg=238.20, stdev=45.96, samples=10 00:30:28.450 lat (msec) : 10=19.11%, 20=76.70%, 50=1.59%, 100=2.60% 00:30:28.450 cpu : usr=90.97%, sys=7.65%, ctx=56, majf=0, minf=0 00:30:28.450 IO depths : 1=3.9%, 2=96.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.450 issued rwts: total=1193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.450 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.450 filename0: (groupid=0, jobs=1): err= 0: pid=98057: Wed May 15 10:11:04 2024 00:30:28.450 read: IOPS=220, BW=27.6MiB/s (28.9MB/s)(138MiB/5006msec) 00:30:28.450 slat (nsec): min=6753, max=60241, avg=16228.94, stdev=6934.68 00:30:28.450 clat (usec): min=3944, max=57287, avg=13561.31, stdev=5180.28 00:30:28.450 lat (usec): min=3954, max=57296, avg=13577.54, stdev=5181.27 00:30:28.450 clat percentiles (usec): 00:30:28.450 | 1.00th=[ 4228], 5.00th=[ 4359], 10.00th=[ 6587], 20.00th=[ 9372], 00:30:28.450 | 30.00th=[12911], 40.00th=[13960], 50.00th=[14615], 60.00th=[15008], 00:30:28.450 | 70.00th=[15401], 80.00th=[15926], 90.00th=[17171], 95.00th=[19006], 00:30:28.450 | 99.00th=[24249], 99.50th=[45351], 99.90th=[56361], 99.95th=[57410], 00:30:28.450 | 99.99th=[57410] 00:30:28.450 bw ( KiB/s): min=22528, max=38400, per=32.05%, avg=28236.80, stdev=5386.22, samples=10 00:30:28.450 iops : min= 176, max= 300, avg=220.60, stdev=42.08, samples=10 00:30:28.450 lat (msec) : 4=0.18%, 10=23.71%, 20=73.03%, 50=2.81%, 100=0.27% 00:30:28.450 cpu : usr=91.17%, sys=7.41%, ctx=22, majf=0, minf=0 00:30:28.450 IO depths : 1=9.5%, 2=90.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.450 issued rwts: total=1105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.450 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.450 filename0: (groupid=0, jobs=1): err= 0: pid=98058: Wed May 15 10:11:04 2024 00:30:28.450 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(144MiB/5006msec) 00:30:28.450 slat (nsec): min=6855, max=48485, avg=15136.13, stdev=6241.95 00:30:28.450 clat (usec): min=4184, max=54528, avg=13048.38, stdev=7108.10 00:30:28.450 lat (usec): min=4197, max=54554, avg=13063.51, stdev=7108.39 00:30:28.450 clat percentiles (usec): 00:30:28.450 | 1.00th=[ 6456], 5.00th=[ 7767], 10.00th=[ 8717], 
20.00th=[10683], 00:30:28.450 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:30:28.450 | 70.00th=[12780], 80.00th=[13435], 90.00th=[14877], 95.00th=[16319], 00:30:28.450 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54264], 99.95th=[54789], 00:30:28.450 | 99.99th=[54789] 00:30:28.450 bw ( KiB/s): min=21504, max=32256, per=32.90%, avg=28984.89, stdev=4308.07, samples=9 00:30:28.450 iops : min= 168, max= 252, avg=226.44, stdev=33.66, samples=9 00:30:28.450 lat (msec) : 10=14.53%, 20=82.07%, 50=1.57%, 100=1.83% 00:30:28.450 cpu : usr=91.33%, sys=7.39%, ctx=11, majf=0, minf=0 00:30:28.450 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:28.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.450 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.450 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:28.450 00:30:28.450 Run status group 0 (all jobs): 00:30:28.450 READ: bw=86.0MiB/s (90.2MB/s), 27.6MiB/s-29.8MiB/s (28.9MB/s-31.2MB/s), io=431MiB (452MB), run=5006-5008msec 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.450 bdev_null0 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.450 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 [2024-05-15 10:11:05.236122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 bdev_null1 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 bdev_null2 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.451 { 00:30:28.451 "params": { 00:30:28.451 "name": "Nvme$subsystem", 00:30:28.451 "trtype": "$TEST_TRANSPORT", 00:30:28.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.451 "adrfam": "ipv4", 
00:30:28.451 "trsvcid": "$NVMF_PORT", 00:30:28.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.451 "hdgst": ${hdgst:-false}, 00:30:28.451 "ddgst": ${ddgst:-false} 00:30:28.451 }, 00:30:28.451 "method": "bdev_nvme_attach_controller" 00:30:28.451 } 00:30:28.451 EOF 00:30:28.451 )") 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.451 { 00:30:28.451 "params": { 00:30:28.451 "name": "Nvme$subsystem", 00:30:28.451 "trtype": "$TEST_TRANSPORT", 00:30:28.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.451 "adrfam": "ipv4", 00:30:28.451 "trsvcid": "$NVMF_PORT", 00:30:28.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.451 "hdgst": ${hdgst:-false}, 00:30:28.451 "ddgst": ${ddgst:-false} 00:30:28.451 }, 00:30:28.451 "method": "bdev_nvme_attach_controller" 00:30:28.451 } 00:30:28.451 EOF 00:30:28.451 )") 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.451 
10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:28.451 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:28.451 { 00:30:28.451 "params": { 00:30:28.451 "name": "Nvme$subsystem", 00:30:28.451 "trtype": "$TEST_TRANSPORT", 00:30:28.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.451 "adrfam": "ipv4", 00:30:28.451 "trsvcid": "$NVMF_PORT", 00:30:28.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.451 "hdgst": ${hdgst:-false}, 00:30:28.452 "ddgst": ${ddgst:-false} 00:30:28.452 }, 00:30:28.452 "method": "bdev_nvme_attach_controller" 00:30:28.452 } 00:30:28.452 EOF 00:30:28.452 )") 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:28.452 "params": { 00:30:28.452 "name": "Nvme0", 00:30:28.452 "trtype": "tcp", 00:30:28.452 "traddr": "10.0.0.2", 00:30:28.452 "adrfam": "ipv4", 00:30:28.452 "trsvcid": "4420", 00:30:28.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:28.452 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:28.452 "hdgst": false, 00:30:28.452 "ddgst": false 00:30:28.452 }, 00:30:28.452 "method": "bdev_nvme_attach_controller" 00:30:28.452 },{ 00:30:28.452 "params": { 00:30:28.452 "name": "Nvme1", 00:30:28.452 "trtype": "tcp", 00:30:28.452 "traddr": "10.0.0.2", 00:30:28.452 "adrfam": "ipv4", 00:30:28.452 "trsvcid": "4420", 00:30:28.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:28.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:28.452 "hdgst": false, 00:30:28.452 "ddgst": false 00:30:28.452 }, 00:30:28.452 "method": "bdev_nvme_attach_controller" 00:30:28.452 },{ 00:30:28.452 "params": { 00:30:28.452 "name": "Nvme2", 00:30:28.452 "trtype": "tcp", 00:30:28.452 "traddr": "10.0.0.2", 00:30:28.452 "adrfam": "ipv4", 00:30:28.452 "trsvcid": "4420", 00:30:28.452 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:28.452 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:28.452 "hdgst": false, 00:30:28.452 "ddgst": false 00:30:28.452 }, 00:30:28.452 "method": "bdev_nvme_attach_controller" 00:30:28.452 }' 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 
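The expanded printf output above is the host-side half of the test: for every subsystem index it is given, gen_nvmf_target_json (nvmf/common.sh) appends one bdev_nvme_attach_controller parameter block to a bash array via a here-doc, joins the blocks with IFS=',' and pretty-prints the result with jq so fio can consume it through --spdk_json_conf /dev/fd/62. A condensed sketch of that pattern follows; the per-controller block is copied from the trace, while the outer "subsystems"/"bdev" wrapper is reconstructed and may differ in detail from the actual script.

gen_nvmf_target_json_sketch() {
    # Accumulate one attach-controller block per subsystem index, as in the trace
    # (config=() / config+=("$(cat <<-EOF ... EOF)")).
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the blocks with commas and validate/pretty-print with jq; the wrapper
    # below is an assumption reconstructed around the joined blocks.
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=,; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON
}

In the run above it is invoked as gen_nvmf_target_json 0 1 2, which yields the three Nvme0/Nvme1/Nvme2 blocks shown in the expanded printf output.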
00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:28.452 10:11:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:28.452 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:28.452 ... 00:30:28.452 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:28.452 ... 00:30:28.452 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:28.452 ... 00:30:28.452 fio-3.35 00:30:28.452 Starting 24 threads 00:30:40.701 00:30:40.701 filename0: (groupid=0, jobs=1): err= 0: pid=98160: Wed May 15 10:11:16 2024 00:30:40.701 read: IOPS=221, BW=888KiB/s (909kB/s)(8900KiB/10025msec) 00:30:40.701 slat (usec): min=7, max=18028, avg=19.52, stdev=382.00 00:30:40.701 clat (msec): min=25, max=141, avg=71.81, stdev=21.47 00:30:40.701 lat (msec): min=25, max=141, avg=71.83, stdev=21.49 00:30:40.701 clat percentiles (msec): 00:30:40.701 | 1.00th=[ 33], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 54], 00:30:40.701 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 74], 60.00th=[ 81], 00:30:40.701 | 70.00th=[ 82], 80.00th=[ 84], 90.00th=[ 107], 95.00th=[ 109], 00:30:40.701 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 142], 00:30:40.701 | 99.99th=[ 142] 00:30:40.701 bw ( KiB/s): min= 768, max= 1104, per=4.11%, avg=886.00, stdev=100.35, samples=20 00:30:40.701 iops : min= 192, max= 276, avg=221.50, stdev=25.09, samples=20 00:30:40.701 lat (msec) : 50=11.06%, 100=76.31%, 250=12.63% 00:30:40.701 cpu : usr=31.61%, sys=1.56%, ctx=408, majf=0, minf=9 00:30:40.701 IO depths : 1=2.2%, 2=4.5%, 4=13.3%, 8=69.5%, 16=10.4%, 32=0.0%, >=64=0.0% 00:30:40.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.701 complete : 0=0.0%, 4=90.9%, 8=3.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.701 issued rwts: total=2225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.701 filename0: (groupid=0, jobs=1): err= 0: pid=98161: Wed May 15 10:11:16 2024 00:30:40.701 read: IOPS=275, BW=1102KiB/s (1129kB/s)(10.8MiB/10051msec) 00:30:40.701 slat (nsec): min=6745, max=83456, avg=10862.43, stdev=4395.01 00:30:40.701 clat (usec): min=1458, max=122078, avg=57978.89, stdev=25778.65 00:30:40.701 lat (usec): min=1465, max=122095, avg=57989.75, stdev=25778.77 00:30:40.701 clat percentiles (usec): 00:30:40.701 | 1.00th=[ 1516], 5.00th=[ 1614], 10.00th=[ 27132], 20.00th=[ 42206], 00:30:40.701 | 30.00th=[ 48497], 40.00th=[ 54264], 50.00th=[ 59507], 60.00th=[ 62653], 00:30:40.701 | 70.00th=[ 70779], 80.00th=[ 79168], 90.00th=[ 87557], 95.00th=[101188], 00:30:40.701 | 99.00th=[111674], 99.50th=[121111], 99.90th=[122160], 99.95th=[122160], 00:30:40.701 | 99.99th=[122160] 00:30:40.701 bw ( KiB/s): min= 720, max= 3200, per=5.11%, avg=1101.60, stdev=514.79, samples=20 00:30:40.702 iops : min= 180, max= 800, avg=275.40, stdev=128.70, samples=20 00:30:40.702 lat (msec) : 2=7.51%, 4=0.58%, 10=1.16%, 50=24.37%, 100=61.34% 00:30:40.702 lat (msec) : 250=5.05% 00:30:40.702 cpu : usr=41.55%, sys=2.22%, ctx=589, majf=0, minf=0 00:30:40.702 IO depths : 1=1.7%, 2=3.3%, 4=11.1%, 8=72.6%, 16=11.3%, 32=0.0%, >=64=0.0% 00:30:40.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 issued rwts: total=2770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.702 filename0: (groupid=0, jobs=1): err= 0: pid=98162: Wed May 15 10:11:16 2024 00:30:40.702 read: IOPS=217, BW=869KiB/s (890kB/s)(8716KiB/10025msec) 00:30:40.702 slat (usec): min=7, max=18032, avg=19.51, stdev=386.09 00:30:40.702 clat (msec): min=29, max=170, avg=73.39, stdev=21.48 00:30:40.702 lat (msec): min=29, max=170, avg=73.41, stdev=21.49 00:30:40.702 clat percentiles (msec): 00:30:40.702 | 1.00th=[ 31], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 55], 00:30:40.702 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 74], 60.00th=[ 79], 00:30:40.702 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 111], 00:30:40.702 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 171], 99.95th=[ 171], 00:30:40.702 | 99.99th=[ 171] 00:30:40.702 bw ( KiB/s): min= 624, max= 1152, per=4.03%, avg=869.20, stdev=112.73, samples=20 00:30:40.702 iops : min= 156, max= 288, avg=217.30, stdev=28.18, samples=20 00:30:40.702 lat (msec) : 50=10.37%, 100=77.79%, 250=11.84% 00:30:40.702 cpu : usr=40.84%, sys=2.08%, ctx=429, majf=0, minf=9 00:30:40.702 IO depths : 1=3.5%, 2=7.4%, 4=17.4%, 8=62.4%, 16=9.3%, 32=0.0%, >=64=0.0% 00:30:40.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 complete : 0=0.0%, 4=92.1%, 8=2.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 issued rwts: total=2179,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.702 filename0: (groupid=0, jobs=1): err= 0: pid=98163: Wed May 15 10:11:16 2024 00:30:40.702 read: IOPS=244, BW=979KiB/s (1002kB/s)(9860KiB/10074msec) 00:30:40.702 slat (usec): min=4, max=5028, avg=13.93, stdev=117.94 00:30:40.702 clat (msec): min=8, max=130, avg=65.19, stdev=21.41 00:30:40.702 lat (msec): min=8, max=130, avg=65.21, stdev=21.41 00:30:40.702 clat percentiles (msec): 00:30:40.702 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 45], 00:30:40.702 | 30.00th=[ 52], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 71], 00:30:40.702 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 105], 00:30:40.702 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 131], 99.95th=[ 131], 00:30:40.702 | 99.99th=[ 131] 00:30:40.702 bw ( KiB/s): min= 560, max= 1336, per=4.55%, avg=979.60, stdev=161.06, samples=20 00:30:40.702 iops : min= 140, max= 334, avg=244.90, stdev=40.26, samples=20 00:30:40.702 lat (msec) : 10=0.28%, 20=0.41%, 50=27.14%, 100=66.57%, 250=5.60% 00:30:40.702 cpu : usr=39.23%, sys=1.78%, ctx=752, majf=0, minf=9 00:30:40.702 IO depths : 1=0.2%, 2=0.6%, 4=6.0%, 8=80.1%, 16=13.1%, 32=0.0%, >=64=0.0% 00:30:40.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 complete : 0=0.0%, 4=89.1%, 8=6.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 issued rwts: total=2465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.702 filename0: (groupid=0, jobs=1): err= 0: pid=98164: Wed May 15 10:11:16 2024 00:30:40.702 read: IOPS=222, BW=890KiB/s (911kB/s)(8936KiB/10042msec) 00:30:40.702 slat (usec): min=3, max=18036, avg=19.66, stdev=381.41 00:30:40.702 clat (msec): min=14, max=150, avg=71.70, stdev=23.84 00:30:40.702 lat (msec): min=14, max=150, avg=71.72, stdev=23.84 00:30:40.702 clat percentiles (msec): 00:30:40.702 | 
1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 54], 00:30:40.702 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 79], 00:30:40.702 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 115], 00:30:40.702 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 150], 00:30:40.702 | 99.99th=[ 150] 00:30:40.702 bw ( KiB/s): min= 640, max= 1264, per=4.12%, avg=887.20, stdev=181.87, samples=20 00:30:40.702 iops : min= 160, max= 316, avg=221.80, stdev=45.47, samples=20 00:30:40.702 lat (msec) : 20=0.72%, 50=12.22%, 100=71.93%, 250=15.13% 00:30:40.702 cpu : usr=31.43%, sys=1.60%, ctx=487, majf=0, minf=9 00:30:40.702 IO depths : 1=1.0%, 2=2.7%, 4=11.1%, 8=73.1%, 16=12.1%, 32=0.0%, >=64=0.0% 00:30:40.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 issued rwts: total=2234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.702 filename0: (groupid=0, jobs=1): err= 0: pid=98165: Wed May 15 10:11:16 2024 00:30:40.702 read: IOPS=223, BW=893KiB/s (914kB/s)(8952KiB/10025msec) 00:30:40.702 slat (nsec): min=4976, max=65401, avg=12064.09, stdev=6264.15 00:30:40.702 clat (msec): min=25, max=148, avg=71.51, stdev=22.19 00:30:40.702 lat (msec): min=25, max=148, avg=71.52, stdev=22.19 00:30:40.702 clat percentiles (msec): 00:30:40.702 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 49], 20.00th=[ 54], 00:30:40.702 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 80], 00:30:40.702 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 107], 95.00th=[ 110], 00:30:40.702 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 150], 99.95th=[ 150], 00:30:40.702 | 99.99th=[ 150] 00:30:40.702 bw ( KiB/s): min= 720, max= 1280, per=4.14%, avg=892.05, stdev=158.64, samples=20 00:30:40.702 iops : min= 180, max= 320, avg=223.00, stdev=39.66, samples=20 00:30:40.702 lat (msec) : 50=13.72%, 100=73.55%, 250=12.73% 00:30:40.702 cpu : usr=31.94%, sys=1.60%, ctx=392, majf=0, minf=9 00:30:40.702 IO depths : 1=0.4%, 2=1.1%, 4=6.9%, 8=78.8%, 16=12.8%, 32=0.0%, >=64=0.0% 00:30:40.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 complete : 0=0.0%, 4=89.4%, 8=5.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 issued rwts: total=2238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.702 filename0: (groupid=0, jobs=1): err= 0: pid=98166: Wed May 15 10:11:16 2024 00:30:40.702 read: IOPS=253, BW=1015KiB/s (1039kB/s)(9.95MiB/10046msec) 00:30:40.702 slat (usec): min=4, max=299, avg=11.05, stdev= 7.19 00:30:40.702 clat (msec): min=6, max=131, avg=62.92, stdev=21.92 00:30:40.702 lat (msec): min=6, max=131, avg=62.93, stdev=21.92 00:30:40.702 clat percentiles (msec): 00:30:40.702 | 1.00th=[ 20], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 47], 00:30:40.702 | 30.00th=[ 51], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 64], 00:30:40.702 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 95], 95.00th=[ 106], 00:30:40.702 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:30:40.702 | 99.99th=[ 132] 00:30:40.702 bw ( KiB/s): min= 640, max= 1328, per=4.71%, avg=1015.20, stdev=165.10, samples=20 00:30:40.702 iops : min= 160, max= 332, avg=253.80, stdev=41.27, samples=20 00:30:40.702 lat (msec) : 10=0.63%, 20=0.63%, 50=28.26%, 100=62.48%, 250=8.01% 00:30:40.702 cpu : usr=44.23%, sys=1.99%, ctx=457, majf=0, minf=9 00:30:40.702 IO depths : 1=0.5%, 2=0.9%, 
4=7.6%, 8=78.2%, 16=12.8%, 32=0.0%, >=64=0.0% 00:30:40.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 complete : 0=0.0%, 4=89.5%, 8=5.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 issued rwts: total=2548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.702 filename0: (groupid=0, jobs=1): err= 0: pid=98167: Wed May 15 10:11:16 2024 00:30:40.702 read: IOPS=246, BW=987KiB/s (1011kB/s)(9920KiB/10049msec) 00:30:40.702 slat (nsec): min=7078, max=55659, avg=10970.58, stdev=4494.49 00:30:40.702 clat (msec): min=16, max=143, avg=64.54, stdev=22.91 00:30:40.702 lat (msec): min=16, max=143, avg=64.55, stdev=22.91 00:30:40.702 clat percentiles (msec): 00:30:40.702 | 1.00th=[ 27], 5.00th=[ 33], 10.00th=[ 40], 20.00th=[ 46], 00:30:40.702 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 69], 00:30:40.702 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 100], 95.00th=[ 108], 00:30:40.702 | 99.00th=[ 129], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:30:40.702 | 99.99th=[ 144] 00:30:40.702 bw ( KiB/s): min= 640, max= 1344, per=4.60%, avg=990.40, stdev=169.16, samples=20 00:30:40.702 iops : min= 160, max= 336, avg=247.60, stdev=42.29, samples=20 00:30:40.702 lat (msec) : 20=0.69%, 50=28.55%, 100=61.57%, 250=9.19% 00:30:40.702 cpu : usr=38.71%, sys=1.50%, ctx=733, majf=0, minf=9 00:30:40.702 IO depths : 1=1.0%, 2=2.2%, 4=8.7%, 8=75.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:40.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.702 issued rwts: total=2480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.702 filename1: (groupid=0, jobs=1): err= 0: pid=98168: Wed May 15 10:11:16 2024 00:30:40.702 read: IOPS=207, BW=832KiB/s (851kB/s)(8340KiB/10030msec) 00:30:40.702 slat (usec): min=7, max=18028, avg=20.39, stdev=394.63 00:30:40.702 clat (msec): min=31, max=161, avg=76.69, stdev=22.32 00:30:40.702 lat (msec): min=31, max=161, avg=76.71, stdev=22.32 00:30:40.702 clat percentiles (msec): 00:30:40.702 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 55], 00:30:40.702 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 79], 60.00th=[ 82], 00:30:40.702 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 118], 00:30:40.702 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 163], 99.95th=[ 163], 00:30:40.702 | 99.99th=[ 163] 00:30:40.702 bw ( KiB/s): min= 640, max= 1000, per=3.84%, avg=827.60, stdev=102.83, samples=20 00:30:40.702 iops : min= 160, max= 250, avg=206.90, stdev=25.71, samples=20 00:30:40.702 lat (msec) : 50=8.35%, 100=74.63%, 250=17.03% 00:30:40.703 cpu : usr=31.64%, sys=1.49%, ctx=478, majf=0, minf=9 00:30:40.703 IO depths : 1=1.6%, 2=3.3%, 4=11.8%, 8=72.2%, 16=11.1%, 32=0.0%, >=64=0.0% 00:30:40.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 complete : 0=0.0%, 4=90.6%, 8=4.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.703 filename1: (groupid=0, jobs=1): err= 0: pid=98169: Wed May 15 10:11:16 2024 00:30:40.703 read: IOPS=238, BW=952KiB/s (975kB/s)(9548KiB/10029msec) 00:30:40.703 slat (usec): min=7, max=13024, avg=20.40, stdev=324.06 00:30:40.703 clat (msec): min=22, max=127, avg=67.05, stdev=22.19 00:30:40.703 lat 
(msec): min=22, max=127, avg=67.07, stdev=22.19 00:30:40.703 clat percentiles (msec): 00:30:40.703 | 1.00th=[ 30], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 49], 00:30:40.703 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 71], 00:30:40.703 | 70.00th=[ 79], 80.00th=[ 86], 90.00th=[ 102], 95.00th=[ 109], 00:30:40.703 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 128], 99.95th=[ 128], 00:30:40.703 | 99.99th=[ 128] 00:30:40.703 bw ( KiB/s): min= 720, max= 1200, per=4.40%, avg=948.40, stdev=143.38, samples=20 00:30:40.703 iops : min= 180, max= 300, avg=237.10, stdev=35.84, samples=20 00:30:40.703 lat (msec) : 50=24.63%, 100=63.72%, 250=11.65% 00:30:40.703 cpu : usr=42.36%, sys=2.15%, ctx=514, majf=0, minf=9 00:30:40.703 IO depths : 1=1.5%, 2=3.2%, 4=10.4%, 8=73.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:40.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 issued rwts: total=2387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.703 filename1: (groupid=0, jobs=1): err= 0: pid=98170: Wed May 15 10:11:16 2024 00:30:40.703 read: IOPS=217, BW=870KiB/s (891kB/s)(8724KiB/10022msec) 00:30:40.703 slat (usec): min=3, max=11288, avg=25.48, stdev=372.32 00:30:40.703 clat (msec): min=25, max=151, avg=73.28, stdev=23.32 00:30:40.703 lat (msec): min=25, max=151, avg=73.30, stdev=23.32 00:30:40.703 clat percentiles (msec): 00:30:40.703 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 46], 20.00th=[ 53], 00:30:40.703 | 30.00th=[ 55], 40.00th=[ 63], 50.00th=[ 74], 60.00th=[ 81], 00:30:40.703 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 110], 00:30:40.703 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 153], 99.95th=[ 153], 00:30:40.703 | 99.99th=[ 153] 00:30:40.703 bw ( KiB/s): min= 768, max= 1120, per=4.03%, avg=869.00, stdev=95.53, samples=20 00:30:40.703 iops : min= 192, max= 280, avg=217.20, stdev=23.85, samples=20 00:30:40.703 lat (msec) : 50=15.41%, 100=70.56%, 250=14.03% 00:30:40.703 cpu : usr=35.26%, sys=1.62%, ctx=478, majf=0, minf=9 00:30:40.703 IO depths : 1=1.1%, 2=2.2%, 4=9.4%, 8=75.4%, 16=12.0%, 32=0.0%, >=64=0.0% 00:30:40.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 complete : 0=0.0%, 4=89.9%, 8=5.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 issued rwts: total=2181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.703 filename1: (groupid=0, jobs=1): err= 0: pid=98171: Wed May 15 10:11:16 2024 00:30:40.703 read: IOPS=238, BW=955KiB/s (978kB/s)(9564KiB/10016msec) 00:30:40.703 slat (usec): min=5, max=10126, avg=15.21, stdev=206.91 00:30:40.703 clat (msec): min=25, max=134, avg=66.91, stdev=20.31 00:30:40.703 lat (msec): min=25, max=134, avg=66.93, stdev=20.31 00:30:40.703 clat percentiles (msec): 00:30:40.703 | 1.00th=[ 29], 5.00th=[ 39], 10.00th=[ 45], 20.00th=[ 51], 00:30:40.703 | 30.00th=[ 54], 40.00th=[ 57], 50.00th=[ 62], 60.00th=[ 74], 00:30:40.703 | 70.00th=[ 81], 80.00th=[ 83], 90.00th=[ 94], 95.00th=[ 108], 00:30:40.703 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 136], 99.95th=[ 136], 00:30:40.703 | 99.99th=[ 136] 00:30:40.703 bw ( KiB/s): min= 640, max= 1200, per=4.41%, avg=950.00, stdev=139.75, samples=20 00:30:40.703 iops : min= 160, max= 300, avg=237.50, stdev=34.94, samples=20 00:30:40.703 lat (msec) : 50=20.83%, 100=69.85%, 250=9.33% 00:30:40.703 cpu : usr=36.04%, sys=1.85%, 
ctx=538, majf=0, minf=9 00:30:40.703 IO depths : 1=1.1%, 2=2.8%, 4=11.4%, 8=73.1%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:40.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 complete : 0=0.0%, 4=90.4%, 8=4.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 issued rwts: total=2391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.703 filename1: (groupid=0, jobs=1): err= 0: pid=98172: Wed May 15 10:11:16 2024 00:30:40.703 read: IOPS=205, BW=823KiB/s (843kB/s)(8236KiB/10010msec) 00:30:40.703 slat (nsec): min=3708, max=69598, avg=11521.03, stdev=5705.10 00:30:40.703 clat (msec): min=20, max=166, avg=77.69, stdev=23.98 00:30:40.703 lat (msec): min=20, max=166, avg=77.71, stdev=23.98 00:30:40.703 clat percentiles (msec): 00:30:40.703 | 1.00th=[ 30], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 55], 00:30:40.703 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 79], 60.00th=[ 82], 00:30:40.703 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 118], 00:30:40.703 | 99.00th=[ 138], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 167], 00:30:40.703 | 99.99th=[ 167] 00:30:40.703 bw ( KiB/s): min= 640, max= 1152, per=3.77%, avg=813.11, stdev=148.10, samples=19 00:30:40.703 iops : min= 160, max= 288, avg=203.26, stdev=37.04, samples=19 00:30:40.703 lat (msec) : 50=8.60%, 100=71.83%, 250=19.57% 00:30:40.703 cpu : usr=38.08%, sys=1.54%, ctx=460, majf=0, minf=9 00:30:40.703 IO depths : 1=1.6%, 2=3.5%, 4=12.3%, 8=71.2%, 16=11.5%, 32=0.0%, >=64=0.0% 00:30:40.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 complete : 0=0.0%, 4=90.7%, 8=4.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 issued rwts: total=2059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.703 filename1: (groupid=0, jobs=1): err= 0: pid=98173: Wed May 15 10:11:16 2024 00:30:40.703 read: IOPS=214, BW=856KiB/s (877kB/s)(8572KiB/10011msec) 00:30:40.703 slat (usec): min=5, max=18033, avg=28.40, stdev=550.30 00:30:40.703 clat (msec): min=30, max=138, avg=74.52, stdev=20.94 00:30:40.703 lat (msec): min=30, max=138, avg=74.55, stdev=20.94 00:30:40.703 clat percentiles (msec): 00:30:40.703 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 54], 00:30:40.703 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 77], 60.00th=[ 82], 00:30:40.703 | 70.00th=[ 82], 80.00th=[ 89], 90.00th=[ 108], 95.00th=[ 109], 00:30:40.703 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 138], 99.95th=[ 138], 00:30:40.703 | 99.99th=[ 138] 00:30:40.703 bw ( KiB/s): min= 720, max= 976, per=3.95%, avg=850.80, stdev=75.64, samples=20 00:30:40.703 iops : min= 180, max= 244, avg=212.70, stdev=18.91, samples=20 00:30:40.703 lat (msec) : 50=7.14%, 100=77.93%, 250=14.93% 00:30:40.703 cpu : usr=31.57%, sys=1.55%, ctx=409, majf=0, minf=9 00:30:40.703 IO depths : 1=1.0%, 2=2.5%, 4=9.6%, 8=74.9%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:40.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 complete : 0=0.0%, 4=90.1%, 8=4.8%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 issued rwts: total=2143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.703 filename1: (groupid=0, jobs=1): err= 0: pid=98174: Wed May 15 10:11:16 2024 00:30:40.703 read: IOPS=218, BW=874KiB/s (895kB/s)(8756KiB/10017msec) 00:30:40.703 slat (nsec): min=7456, max=62253, avg=11715.90, stdev=5572.90 00:30:40.703 clat (msec): 
min=18, max=143, avg=73.15, stdev=22.85 00:30:40.703 lat (msec): min=18, max=143, avg=73.16, stdev=22.85 00:30:40.703 clat percentiles (msec): 00:30:40.703 | 1.00th=[ 23], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 55], 00:30:40.703 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 73], 60.00th=[ 80], 00:30:40.703 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 105], 95.00th=[ 111], 00:30:40.703 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:30:40.703 | 99.99th=[ 144] 00:30:40.703 bw ( KiB/s): min= 640, max= 1168, per=4.03%, avg=869.20, stdev=139.83, samples=20 00:30:40.703 iops : min= 160, max= 292, avg=217.30, stdev=34.96, samples=20 00:30:40.703 lat (msec) : 20=0.46%, 50=14.80%, 100=68.89%, 250=15.85% 00:30:40.703 cpu : usr=43.66%, sys=1.97%, ctx=798, majf=0, minf=9 00:30:40.703 IO depths : 1=0.4%, 2=0.8%, 4=5.8%, 8=79.3%, 16=13.6%, 32=0.0%, >=64=0.0% 00:30:40.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 complete : 0=0.0%, 4=89.2%, 8=6.7%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.703 issued rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.703 filename1: (groupid=0, jobs=1): err= 0: pid=98175: Wed May 15 10:11:16 2024 00:30:40.703 read: IOPS=218, BW=874KiB/s (895kB/s)(8784KiB/10048msec) 00:30:40.703 slat (nsec): min=4966, max=69709, avg=11666.78, stdev=5559.01 00:30:40.703 clat (msec): min=15, max=138, avg=73.02, stdev=22.46 00:30:40.703 lat (msec): min=15, max=138, avg=73.03, stdev=22.46 00:30:40.703 clat percentiles (msec): 00:30:40.703 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 54], 00:30:40.703 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 74], 60.00th=[ 81], 00:30:40.703 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 108], 95.00th=[ 109], 00:30:40.703 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 140], 99.95th=[ 140], 00:30:40.703 | 99.99th=[ 140] 00:30:40.703 bw ( KiB/s): min= 640, max= 1072, per=4.06%, avg=874.45, stdev=117.06, samples=20 00:30:40.703 iops : min= 160, max= 268, avg=218.60, stdev=29.27, samples=20 00:30:40.703 lat (msec) : 20=0.73%, 50=8.11%, 100=75.82%, 250=15.35% 00:30:40.703 cpu : usr=31.62%, sys=1.52%, ctx=422, majf=0, minf=9 00:30:40.703 IO depths : 1=1.1%, 2=2.5%, 4=10.2%, 8=74.5%, 16=11.8%, 32=0.0%, >=64=0.0% 00:30:40.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 complete : 0=0.0%, 4=90.2%, 8=4.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.704 filename2: (groupid=0, jobs=1): err= 0: pid=98176: Wed May 15 10:11:16 2024 00:30:40.704 read: IOPS=216, BW=867KiB/s (888kB/s)(8676KiB/10005msec) 00:30:40.704 slat (nsec): min=3599, max=56013, avg=11470.26, stdev=5589.86 00:30:40.704 clat (msec): min=30, max=147, avg=73.71, stdev=19.20 00:30:40.704 lat (msec): min=30, max=147, avg=73.72, stdev=19.20 00:30:40.704 clat percentiles (msec): 00:30:40.704 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 57], 00:30:40.704 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 78], 00:30:40.704 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 101], 95.00th=[ 107], 00:30:40.704 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 148], 99.95th=[ 148], 00:30:40.704 | 99.99th=[ 148] 00:30:40.704 bw ( KiB/s): min= 736, max= 1200, per=3.99%, avg=859.37, stdev=107.01, samples=19 00:30:40.704 iops : min= 184, max= 300, avg=214.84, stdev=26.75, samples=19 00:30:40.704 
lat (msec) : 50=7.75%, 100=81.51%, 250=10.74% 00:30:40.704 cpu : usr=40.66%, sys=1.90%, ctx=561, majf=0, minf=9 00:30:40.704 IO depths : 1=2.8%, 2=5.8%, 4=16.0%, 8=65.5%, 16=9.9%, 32=0.0%, >=64=0.0% 00:30:40.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 issued rwts: total=2169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.704 filename2: (groupid=0, jobs=1): err= 0: pid=98177: Wed May 15 10:11:16 2024 00:30:40.704 read: IOPS=198, BW=795KiB/s (814kB/s)(7960KiB/10011msec) 00:30:40.704 slat (usec): min=4, max=20025, avg=30.60, stdev=603.56 00:30:40.704 clat (msec): min=21, max=178, avg=80.27, stdev=23.07 00:30:40.704 lat (msec): min=21, max=178, avg=80.30, stdev=23.09 00:30:40.704 clat percentiles (msec): 00:30:40.704 | 1.00th=[ 33], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 59], 00:30:40.704 | 30.00th=[ 64], 40.00th=[ 79], 50.00th=[ 82], 60.00th=[ 83], 00:30:40.704 | 70.00th=[ 87], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 122], 00:30:40.704 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 180], 00:30:40.704 | 99.99th=[ 180] 00:30:40.704 bw ( KiB/s): min= 512, max= 944, per=3.66%, avg=789.65, stdev=103.80, samples=20 00:30:40.704 iops : min= 128, max= 236, avg=197.40, stdev=25.97, samples=20 00:30:40.704 lat (msec) : 50=3.97%, 100=77.54%, 250=18.49% 00:30:40.704 cpu : usr=31.64%, sys=1.51%, ctx=475, majf=0, minf=9 00:30:40.704 IO depths : 1=2.2%, 2=4.9%, 4=14.7%, 8=67.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:30:40.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 complete : 0=0.0%, 4=91.6%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 issued rwts: total=1990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.704 filename2: (groupid=0, jobs=1): err= 0: pid=98178: Wed May 15 10:11:16 2024 00:30:40.704 read: IOPS=216, BW=865KiB/s (886kB/s)(8656KiB/10002msec) 00:30:40.704 slat (usec): min=5, max=9008, avg=16.89, stdev=193.50 00:30:40.704 clat (msec): min=3, max=145, avg=73.84, stdev=24.88 00:30:40.704 lat (msec): min=3, max=145, avg=73.86, stdev=24.88 00:30:40.704 clat percentiles (msec): 00:30:40.704 | 1.00th=[ 27], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 53], 00:30:40.704 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 74], 60.00th=[ 81], 00:30:40.704 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 113], 00:30:40.704 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:30:40.704 | 99.99th=[ 146] 00:30:40.704 bw ( KiB/s): min= 592, max= 1024, per=3.92%, avg=845.47, stdev=113.17, samples=19 00:30:40.704 iops : min= 148, max= 256, avg=211.37, stdev=28.29, samples=19 00:30:40.704 lat (msec) : 4=0.28%, 50=15.25%, 100=66.13%, 250=18.35% 00:30:40.704 cpu : usr=32.29%, sys=1.53%, ctx=393, majf=0, minf=9 00:30:40.704 IO depths : 1=1.1%, 2=2.6%, 4=10.4%, 8=74.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:30:40.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 complete : 0=0.0%, 4=90.1%, 8=4.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.704 filename2: (groupid=0, jobs=1): err= 0: pid=98179: Wed May 15 10:11:16 2024 00:30:40.704 read: IOPS=229, BW=916KiB/s (938kB/s)(9172KiB/10012msec) 00:30:40.704 
slat (usec): min=3, max=13022, avg=22.83, stdev=381.02 00:30:40.704 clat (msec): min=25, max=160, avg=69.67, stdev=22.61 00:30:40.704 lat (msec): min=25, max=160, avg=69.70, stdev=22.61 00:30:40.704 clat percentiles (msec): 00:30:40.704 | 1.00th=[ 30], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 52], 00:30:40.704 | 30.00th=[ 54], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 75], 00:30:40.704 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 116], 00:30:40.704 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 161], 99.95th=[ 161], 00:30:40.704 | 99.99th=[ 161] 00:30:40.704 bw ( KiB/s): min= 688, max= 1200, per=4.25%, avg=915.60, stdev=143.43, samples=20 00:30:40.704 iops : min= 172, max= 300, avg=228.90, stdev=35.86, samples=20 00:30:40.704 lat (msec) : 50=17.01%, 100=73.53%, 250=9.46% 00:30:40.704 cpu : usr=40.18%, sys=1.81%, ctx=468, majf=0, minf=9 00:30:40.704 IO depths : 1=0.8%, 2=1.7%, 4=8.1%, 8=77.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:30:40.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 complete : 0=0.0%, 4=89.8%, 8=5.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 issued rwts: total=2293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.704 filename2: (groupid=0, jobs=1): err= 0: pid=98180: Wed May 15 10:11:16 2024 00:30:40.704 read: IOPS=204, BW=817KiB/s (836kB/s)(8176KiB/10011msec) 00:30:40.704 slat (usec): min=5, max=17781, avg=28.25, stdev=487.66 00:30:40.704 clat (msec): min=18, max=152, avg=78.21, stdev=21.13 00:30:40.704 lat (msec): min=18, max=152, avg=78.24, stdev=21.12 00:30:40.704 clat percentiles (msec): 00:30:40.704 | 1.00th=[ 31], 5.00th=[ 53], 10.00th=[ 58], 20.00th=[ 62], 00:30:40.704 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 80], 00:30:40.704 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 114], 00:30:40.704 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 153], 00:30:40.704 | 99.99th=[ 153] 00:30:40.704 bw ( KiB/s): min= 640, max= 896, per=3.74%, avg=806.74, stdev=84.38, samples=19 00:30:40.704 iops : min= 160, max= 224, avg=201.68, stdev=21.10, samples=19 00:30:40.704 lat (msec) : 20=0.29%, 50=3.42%, 100=82.05%, 250=14.24% 00:30:40.704 cpu : usr=39.62%, sys=1.99%, ctx=637, majf=0, minf=9 00:30:40.704 IO depths : 1=4.7%, 2=9.6%, 4=21.7%, 8=56.2%, 16=7.9%, 32=0.0%, >=64=0.0% 00:30:40.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.704 filename2: (groupid=0, jobs=1): err= 0: pid=98181: Wed May 15 10:11:16 2024 00:30:40.704 read: IOPS=241, BW=966KiB/s (990kB/s)(9732KiB/10071msec) 00:30:40.704 slat (usec): min=5, max=18031, avg=18.92, stdev=365.36 00:30:40.704 clat (msec): min=5, max=134, avg=66.12, stdev=22.76 00:30:40.704 lat (msec): min=5, max=134, avg=66.14, stdev=22.76 00:30:40.704 clat percentiles (msec): 00:30:40.704 | 1.00th=[ 7], 5.00th=[ 33], 10.00th=[ 41], 20.00th=[ 51], 00:30:40.704 | 30.00th=[ 54], 40.00th=[ 57], 50.00th=[ 60], 60.00th=[ 73], 00:30:40.704 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 104], 95.00th=[ 108], 00:30:40.704 | 99.00th=[ 115], 99.50th=[ 131], 99.90th=[ 134], 99.95th=[ 134], 00:30:40.704 | 99.99th=[ 134] 00:30:40.704 bw ( KiB/s): min= 640, max= 1536, per=4.49%, avg=966.80, stdev=186.40, samples=20 00:30:40.704 iops : min= 160, max= 
384, avg=241.70, stdev=46.60, samples=20 00:30:40.704 lat (msec) : 10=1.32%, 20=0.66%, 50=17.02%, 100=70.24%, 250=10.77% 00:30:40.704 cpu : usr=38.31%, sys=1.67%, ctx=550, majf=0, minf=9 00:30:40.704 IO depths : 1=1.4%, 2=2.8%, 4=9.7%, 8=74.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:40.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 complete : 0=0.0%, 4=90.0%, 8=5.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 issued rwts: total=2433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.704 filename2: (groupid=0, jobs=1): err= 0: pid=98182: Wed May 15 10:11:16 2024 00:30:40.704 read: IOPS=225, BW=902KiB/s (924kB/s)(9024KiB/10004msec) 00:30:40.704 slat (usec): min=7, max=20029, avg=37.68, stdev=660.47 00:30:40.704 clat (msec): min=25, max=177, avg=70.65, stdev=22.81 00:30:40.704 lat (msec): min=25, max=177, avg=70.69, stdev=22.82 00:30:40.704 clat percentiles (msec): 00:30:40.704 | 1.00th=[ 29], 5.00th=[ 41], 10.00th=[ 45], 20.00th=[ 52], 00:30:40.704 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 67], 60.00th=[ 78], 00:30:40.704 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 101], 95.00th=[ 113], 00:30:40.704 | 99.00th=[ 128], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 178], 00:30:40.704 | 99.99th=[ 178] 00:30:40.704 bw ( KiB/s): min= 648, max= 1280, per=4.16%, avg=896.00, stdev=162.71, samples=19 00:30:40.704 iops : min= 162, max= 320, avg=224.00, stdev=40.68, samples=19 00:30:40.704 lat (msec) : 50=17.73%, 100=72.16%, 250=10.11% 00:30:40.704 cpu : usr=42.11%, sys=1.82%, ctx=561, majf=0, minf=9 00:30:40.704 IO depths : 1=1.9%, 2=4.2%, 4=11.3%, 8=71.5%, 16=11.3%, 32=0.0%, >=64=0.0% 00:30:40.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 complete : 0=0.0%, 4=90.8%, 8=4.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.704 issued rwts: total=2256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.704 filename2: (groupid=0, jobs=1): err= 0: pid=98183: Wed May 15 10:11:16 2024 00:30:40.704 read: IOPS=212, BW=848KiB/s (868kB/s)(8492KiB/10013msec) 00:30:40.704 slat (usec): min=4, max=18034, avg=21.26, stdev=391.19 00:30:40.704 clat (msec): min=29, max=147, avg=75.22, stdev=21.94 00:30:40.704 lat (msec): min=29, max=147, avg=75.24, stdev=21.94 00:30:40.704 clat percentiles (msec): 00:30:40.704 | 1.00th=[ 45], 5.00th=[ 51], 10.00th=[ 53], 20.00th=[ 55], 00:30:40.704 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 74], 60.00th=[ 81], 00:30:40.704 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 108], 95.00th=[ 117], 00:30:40.704 | 99.00th=[ 142], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:30:40.705 | 99.99th=[ 148] 00:30:40.705 bw ( KiB/s): min= 640, max= 1024, per=3.93%, avg=846.70, stdev=97.78, samples=20 00:30:40.705 iops : min= 160, max= 256, avg=211.65, stdev=24.49, samples=20 00:30:40.705 lat (msec) : 50=4.80%, 100=81.35%, 250=13.85% 00:30:40.705 cpu : usr=32.18%, sys=1.75%, ctx=401, majf=0, minf=9 00:30:40.705 IO depths : 1=1.5%, 2=3.3%, 4=11.2%, 8=72.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:30:40.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.705 complete : 0=0.0%, 4=90.3%, 8=4.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.705 issued rwts: total=2123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:40.705 00:30:40.705 Run status group 0 (all jobs): 00:30:40.705 READ: bw=21.0MiB/s (22.1MB/s), 
795KiB/s-1102KiB/s (814kB/s-1129kB/s), io=212MiB (222MB), run=10002-10074msec 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 
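Between scenarios the script tears the target configuration down and rebuilds it with the next DIF setting: the trace above deletes subsystems 0-2 and their --dif-type 2 null bdevs, and the lines that follow recreate subsystems 0 and 1 with --dif-type 1 for the bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5 run. The per-subsystem RPC sequence replayed by create_subsystem/destroy_subsystem in target/dif.sh looks like the following when issued directly with rpc.py (a sketch; the script goes through its rpc_cmd wrapper and the default RPC socket, and the bdev size/block-size arguments are the ones used in the trace):

# Target-side setup for one subsystem, arguments copied from the trace (sub_id=0).
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Teardown in the reverse direction, as done by destroy_subsystem above.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py bdev_null_delete bdev_null0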
10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 bdev_null0 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 [2024-05-15 10:11:16.955015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 bdev_null1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.705 10:11:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:40.705 { 00:30:40.705 "params": { 00:30:40.705 "name": "Nvme$subsystem", 00:30:40.705 "trtype": "$TEST_TRANSPORT", 00:30:40.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.705 "adrfam": "ipv4", 00:30:40.705 "trsvcid": "$NVMF_PORT", 00:30:40.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.705 "hdgst": ${hdgst:-false}, 00:30:40.705 "ddgst": ${ddgst:-false} 00:30:40.705 }, 00:30:40.705 "method": "bdev_nvme_attach_controller" 00:30:40.705 } 00:30:40.705 EOF 00:30:40.705 )") 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:30:40.705 10:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:40.706 { 00:30:40.706 "params": { 00:30:40.706 "name": "Nvme$subsystem", 00:30:40.706 "trtype": "$TEST_TRANSPORT", 00:30:40.706 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.706 "adrfam": "ipv4", 00:30:40.706 "trsvcid": "$NVMF_PORT", 00:30:40.706 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.706 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.706 "hdgst": ${hdgst:-false}, 00:30:40.706 "ddgst": ${ddgst:-false} 00:30:40.706 }, 00:30:40.706 "method": "bdev_nvme_attach_controller" 00:30:40.706 } 00:30:40.706 EOF 00:30:40.706 )") 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:40.706 "params": { 00:30:40.706 "name": "Nvme0", 00:30:40.706 "trtype": "tcp", 00:30:40.706 "traddr": "10.0.0.2", 00:30:40.706 "adrfam": "ipv4", 00:30:40.706 "trsvcid": "4420", 00:30:40.706 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.706 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:40.706 "hdgst": false, 00:30:40.706 "ddgst": false 00:30:40.706 }, 00:30:40.706 "method": "bdev_nvme_attach_controller" 00:30:40.706 },{ 00:30:40.706 "params": { 00:30:40.706 "name": "Nvme1", 00:30:40.706 "trtype": "tcp", 00:30:40.706 "traddr": "10.0.0.2", 00:30:40.706 "adrfam": "ipv4", 00:30:40.706 "trsvcid": "4420", 00:30:40.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:40.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:40.706 "hdgst": false, 00:30:40.706 "ddgst": false 00:30:40.706 }, 00:30:40.706 "method": "bdev_nvme_attach_controller" 00:30:40.706 }' 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:40.706 10:11:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.706 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:40.706 ... 00:30:40.706 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:40.706 ... 
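The trace above is the harness wiring fio to SPDK's bdev ioengine: the NVMe/TCP attach parameters for cnode0 and cnode1 are rendered to JSON and handed to fio on /dev/fd/62, the generated job file arrives on /dev/fd/61, and the earlier ldd/grep checks only decide whether an ASan runtime must be preloaded ahead of the plugin (none was found here, so LD_PRELOAD carries just the fio plugin). A rough standalone equivalent, with placeholder file names in place of the process-substitution descriptors, would be:

  # bdev.json - one bdev_nvme_attach_controller entry per subsystem
  #             (trtype tcp, traddr 10.0.0.2, trsvcid 4420), as printed above
  # dif.job   - fio job file from gen_fio_conf (bs=8k,16k,128k, numjobs=2,
  #             iodepth=8, runtime=5 in this pass)
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.job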
00:30:40.706 fio-3.35 00:30:40.706 Starting 4 threads 00:30:46.154 00:30:46.154 filename0: (groupid=0, jobs=1): err= 0: pid=98304: Wed May 15 10:11:22 2024 00:30:46.154 read: IOPS=1755, BW=13.7MiB/s (14.4MB/s)(68.6MiB/5003msec) 00:30:46.154 slat (nsec): min=4913, max=64218, avg=13950.83, stdev=4267.19 00:30:46.154 clat (usec): min=2336, max=6654, avg=4488.89, stdev=288.60 00:30:46.154 lat (usec): min=2351, max=6670, avg=4502.84, stdev=288.16 00:30:46.154 clat percentiles (usec): 00:30:46.154 | 1.00th=[ 3720], 5.00th=[ 4080], 10.00th=[ 4178], 20.00th=[ 4359], 00:30:46.154 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4490], 00:30:46.154 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4948], 00:30:46.154 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6194], 99.95th=[ 6194], 00:30:46.154 | 99.99th=[ 6652] 00:30:46.154 bw ( KiB/s): min=13184, max=14592, per=25.01%, avg=14065.78, stdev=407.01, samples=9 00:30:46.154 iops : min= 1648, max= 1824, avg=1758.22, stdev=50.88, samples=9 00:30:46.154 lat (msec) : 4=4.12%, 10=95.88% 00:30:46.154 cpu : usr=92.08%, sys=6.82%, ctx=1182, majf=0, minf=0 00:30:46.154 IO depths : 1=10.8%, 2=25.0%, 4=50.0%, 8=14.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.154 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.154 issued rwts: total=8784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.154 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.154 filename0: (groupid=0, jobs=1): err= 0: pid=98305: Wed May 15 10:11:22 2024 00:30:46.154 read: IOPS=1762, BW=13.8MiB/s (14.4MB/s)(68.9MiB/5001msec) 00:30:46.154 slat (nsec): min=7085, max=42631, avg=14331.14, stdev=4216.88 00:30:46.154 clat (usec): min=707, max=6851, avg=4471.47, stdev=372.89 00:30:46.154 lat (usec): min=722, max=6865, avg=4485.80, stdev=372.02 00:30:46.154 clat percentiles (usec): 00:30:46.154 | 1.00th=[ 3621], 5.00th=[ 4015], 10.00th=[ 4178], 20.00th=[ 4359], 00:30:46.154 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4490], 00:30:46.154 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4948], 00:30:46.154 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6194], 99.95th=[ 6194], 00:30:46.154 | 99.99th=[ 6849] 00:30:46.154 bw ( KiB/s): min=13184, max=15232, per=25.14%, avg=14136.89, stdev=547.23, samples=9 00:30:46.154 iops : min= 1648, max= 1904, avg=1767.11, stdev=68.40, samples=9 00:30:46.154 lat (usec) : 750=0.01% 00:30:46.154 lat (msec) : 2=0.53%, 4=4.32%, 10=95.13% 00:30:46.154 cpu : usr=91.80%, sys=7.22%, ctx=14, majf=0, minf=0 00:30:46.154 IO depths : 1=11.0%, 2=25.0%, 4=50.0%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.154 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.154 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.154 issued rwts: total=8816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.154 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.154 filename1: (groupid=0, jobs=1): err= 0: pid=98306: Wed May 15 10:11:22 2024 00:30:46.154 read: IOPS=1756, BW=13.7MiB/s (14.4MB/s)(68.6MiB/5002msec) 00:30:46.154 slat (nsec): min=7003, max=43722, avg=11445.91, stdev=3835.13 00:30:46.154 clat (usec): min=1354, max=7833, avg=4517.61, stdev=369.56 00:30:46.154 lat (usec): min=1364, max=7847, avg=4529.06, stdev=369.13 00:30:46.154 clat percentiles (usec): 00:30:46.154 | 1.00th=[ 3654], 5.00th=[ 3884], 10.00th=[ 4178], 20.00th=[ 4359], 
00:30:46.154 | 30.00th=[ 4490], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 00:30:46.154 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 5145], 00:30:46.154 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 7111], 99.95th=[ 7701], 00:30:46.154 | 99.99th=[ 7832] 00:30:46.154 bw ( KiB/s): min=13184, max=14621, per=25.02%, avg=14069.00, stdev=413.73, samples=9 00:30:46.154 iops : min= 1648, max= 1827, avg=1758.56, stdev=51.61, samples=9 00:30:46.154 lat (msec) : 2=0.07%, 4=6.61%, 10=93.32% 00:30:46.155 cpu : usr=92.28%, sys=6.54%, ctx=69, majf=0, minf=9 00:30:46.155 IO depths : 1=0.9%, 2=3.3%, 4=71.6%, 8=24.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.155 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.155 issued rwts: total=8784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.155 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.155 filename1: (groupid=0, jobs=1): err= 0: pid=98307: Wed May 15 10:11:22 2024 00:30:46.155 read: IOPS=1755, BW=13.7MiB/s (14.4MB/s)(68.6MiB/5001msec) 00:30:46.155 slat (nsec): min=6136, max=38442, avg=10023.81, stdev=3979.58 00:30:46.155 clat (usec): min=1595, max=8666, avg=4507.02, stdev=438.90 00:30:46.155 lat (usec): min=1603, max=8673, avg=4517.05, stdev=438.72 00:30:46.155 clat percentiles (usec): 00:30:46.155 | 1.00th=[ 3228], 5.00th=[ 3916], 10.00th=[ 4178], 20.00th=[ 4359], 00:30:46.155 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4490], 00:30:46.155 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 4948], 00:30:46.155 | 99.00th=[ 6259], 99.50th=[ 6980], 99.90th=[ 8029], 99.95th=[ 8291], 00:30:46.155 | 99.99th=[ 8717] 00:30:46.155 bw ( KiB/s): min=13184, max=14512, per=25.03%, avg=14071.11, stdev=411.99, samples=9 00:30:46.155 iops : min= 1648, max= 1814, avg=1758.89, stdev=51.50, samples=9 00:30:46.155 lat (msec) : 2=0.05%, 4=5.40%, 10=94.56% 00:30:46.155 cpu : usr=92.66%, sys=6.42%, ctx=78, majf=0, minf=9 00:30:46.155 IO depths : 1=10.0%, 2=24.9%, 4=50.1%, 8=15.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:46.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.155 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:46.155 issued rwts: total=8779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:46.155 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:46.155 00:30:46.155 Run status group 0 (all jobs): 00:30:46.155 READ: bw=54.9MiB/s (57.6MB/s), 13.7MiB/s-13.8MiB/s (14.4MB/s-14.4MB/s), io=275MiB (288MB), run=5001-5003msec 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.155 
10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.155 00:30:46.155 real 0m24.401s 00:30:46.155 user 2m4.313s 00:30:46.155 sys 0m7.840s 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:46.155 ************************************ 00:30:46.155 END TEST fio_dif_rand_params 00:30:46.155 ************************************ 00:30:46.155 10:11:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:46.155 10:11:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:46.155 10:11:23 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:46.155 10:11:23 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:46.155 10:11:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:46.155 ************************************ 00:30:46.155 START TEST fio_dif_digest 00:30:46.155 ************************************ 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # fio_dif_digest 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest 
-- target/dif.sh@28 -- # local sub 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:46.155 bdev_null0 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:46.155 [2024-05-15 10:11:23.432056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local sanitizers 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # shift 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- 
nvmf/common.sh@532 -- # local subsystem config 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:46.155 { 00:30:46.155 "params": { 00:30:46.155 "name": "Nvme$subsystem", 00:30:46.155 "trtype": "$TEST_TRANSPORT", 00:30:46.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.155 "adrfam": "ipv4", 00:30:46.155 "trsvcid": "$NVMF_PORT", 00:30:46.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.155 "hdgst": ${hdgst:-false}, 00:30:46.155 "ddgst": ${ddgst:-false} 00:30:46.155 }, 00:30:46.155 "method": "bdev_nvme_attach_controller" 00:30:46.155 } 00:30:46.155 EOF 00:30:46.155 )") 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libasan 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
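The create_subsystems call traced above boils down to four RPCs against the running target: create a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, wrap it in an NVMe-oF subsystem, attach it as a namespace, and open a TCP listener. Issued by hand with SPDK's rpc.py helper (functionally equivalent to the rpc_cmd wrapper used here, default RPC socket assumed), the sequence is roughly:

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420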
00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:46.155 "params": { 00:30:46.155 "name": "Nvme0", 00:30:46.155 "trtype": "tcp", 00:30:46.155 "traddr": "10.0.0.2", 00:30:46.155 "adrfam": "ipv4", 00:30:46.155 "trsvcid": "4420", 00:30:46.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:46.155 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:46.155 "hdgst": true, 00:30:46.155 "ddgst": true 00:30:46.155 }, 00:30:46.155 "method": "bdev_nvme_attach_controller" 00:30:46.155 }' 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libclang_rt.asan 00:30:46.155 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:46.436 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib= 00:30:46.436 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n '' ]] 00:30:46.436 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:46.436 10:11:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.436 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:46.436 ... 
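Relative to the previous pass the change is on the transport: "hdgst" and "ddgst" are now true, so every NVMe/TCP PDU carries header and data digests while fio reruns the random-read workload, this time as 3 jobs of 128 KiB reads at queue depth 3 for 10 seconds. A minimal job file matching those parameters would look roughly like the sketch below; the filename is the bdev name exposed by the attach call, assumed here to be Nvme0n1.

  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=10
  time_based=1

  [filename0]
  filename=Nvme0n1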
00:30:46.436 fio-3.35 00:30:46.436 Starting 3 threads 00:30:58.694 00:30:58.694 filename0: (groupid=0, jobs=1): err= 0: pid=98413: Wed May 15 10:11:34 2024 00:30:58.694 read: IOPS=185, BW=23.2MiB/s (24.3MB/s)(232MiB/10004msec) 00:30:58.694 slat (usec): min=6, max=288, avg=12.85, stdev= 8.96 00:30:58.694 clat (usec): min=5505, max=20312, avg=16127.39, stdev=1822.74 00:30:58.694 lat (usec): min=5513, max=20329, avg=16140.24, stdev=1822.98 00:30:58.694 clat percentiles (usec): 00:30:58.694 | 1.00th=[ 9241], 5.00th=[10814], 10.00th=[15008], 20.00th=[15795], 00:30:58.694 | 30.00th=[16057], 40.00th=[16319], 50.00th=[16581], 60.00th=[16712], 00:30:58.694 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17433], 95.00th=[17957], 00:30:58.694 | 99.00th=[18220], 99.50th=[18482], 99.90th=[20317], 99.95th=[20317], 00:30:58.694 | 99.99th=[20317] 00:30:58.694 bw ( KiB/s): min=22272, max=26880, per=27.85%, avg=23646.32, stdev=1073.34, samples=19 00:30:58.694 iops : min= 174, max= 210, avg=184.74, stdev= 8.39, samples=19 00:30:58.694 lat (msec) : 10=2.10%, 20=97.74%, 50=0.16% 00:30:58.694 cpu : usr=90.68%, sys=7.77%, ctx=231, majf=0, minf=9 00:30:58.694 IO depths : 1=32.4%, 2=67.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.694 issued rwts: total=1857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.694 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:58.694 filename0: (groupid=0, jobs=1): err= 0: pid=98414: Wed May 15 10:11:34 2024 00:30:58.694 read: IOPS=232, BW=29.0MiB/s (30.4MB/s)(290MiB/10004msec) 00:30:58.694 slat (nsec): min=5146, max=37055, avg=14348.02, stdev=4151.94 00:30:58.694 clat (usec): min=6420, max=17316, avg=12909.15, stdev=1636.56 00:30:58.694 lat (usec): min=6432, max=17332, avg=12923.50, stdev=1636.71 00:30:58.694 clat percentiles (usec): 00:30:58.694 | 1.00th=[ 7373], 5.00th=[ 8717], 10.00th=[11338], 20.00th=[12125], 00:30:58.694 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:30:58.694 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14484], 95.00th=[14877], 00:30:58.694 | 99.00th=[15664], 99.50th=[16188], 99.90th=[16712], 99.95th=[16909], 00:30:58.694 | 99.99th=[17433] 00:30:58.694 bw ( KiB/s): min=27648, max=33280, per=34.98%, avg=29699.05, stdev=1354.57, samples=20 00:30:58.694 iops : min= 216, max= 260, avg=232.00, stdev=10.56, samples=20 00:30:58.694 lat (msec) : 10=6.68%, 20=93.32% 00:30:58.694 cpu : usr=90.88%, sys=7.85%, ctx=49, majf=0, minf=9 00:30:58.694 IO depths : 1=3.1%, 2=96.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.694 issued rwts: total=2321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.694 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:58.694 filename0: (groupid=0, jobs=1): err= 0: pid=98415: Wed May 15 10:11:34 2024 00:30:58.694 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(310MiB/10042msec) 00:30:58.694 slat (nsec): min=5069, max=42653, avg=13585.74, stdev=3996.63 00:30:58.694 clat (usec): min=8641, max=54489, avg=12100.41, stdev=4874.27 00:30:58.694 lat (usec): min=8656, max=54501, avg=12113.99, stdev=4874.35 00:30:58.694 clat percentiles (usec): 00:30:58.694 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:30:58.694 | 30.00th=[11076], 
40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:30:58.694 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[12911], 00:30:58.694 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53740], 99.95th=[54264], 00:30:58.694 | 99.99th=[54264] 00:30:58.694 bw ( KiB/s): min=27136, max=34048, per=37.40%, avg=31756.80, stdev=2165.03, samples=20 00:30:58.694 iops : min= 212, max= 266, avg=248.10, stdev=16.91, samples=20 00:30:58.694 lat (msec) : 10=2.46%, 20=96.13%, 100=1.41% 00:30:58.694 cpu : usr=90.65%, sys=8.14%, ctx=64, majf=0, minf=0 00:30:58.694 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.694 issued rwts: total=2483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.694 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:58.694 00:30:58.694 Run status group 0 (all jobs): 00:30:58.694 READ: bw=82.9MiB/s (86.9MB/s), 23.2MiB/s-30.9MiB/s (24.3MB/s-32.4MB/s), io=833MiB (873MB), run=10004-10042msec 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.694 00:30:58.694 real 0m11.237s 00:30:58.694 user 0m28.045s 00:30:58.694 sys 0m2.752s 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:58.694 10:11:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:58.694 ************************************ 00:30:58.694 END TEST fio_dif_digest 00:30:58.694 ************************************ 00:30:58.694 10:11:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:58.694 10:11:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:58.694 10:11:34 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:58.694 10:11:34 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:58.694 10:11:34 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:58.694 10:11:34 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:58.694 10:11:34 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:58.694 10:11:34 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:58.694 rmmod nvme_tcp 00:30:58.694 rmmod nvme_fabrics 00:30:58.694 10:11:34 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:58.694 10:11:34 
nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:58.694 10:11:34 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:58.694 10:11:34 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97651 ']' 00:30:58.694 10:11:34 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97651 00:30:58.694 10:11:34 nvmf_dif -- common/autotest_common.sh@947 -- # '[' -z 97651 ']' 00:30:58.694 10:11:34 nvmf_dif -- common/autotest_common.sh@951 -- # kill -0 97651 00:30:58.694 10:11:34 nvmf_dif -- common/autotest_common.sh@952 -- # uname 00:30:58.694 10:11:34 nvmf_dif -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:58.694 10:11:34 nvmf_dif -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 97651 00:30:58.694 10:11:34 nvmf_dif -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:58.694 killing process with pid 97651 00:30:58.694 10:11:34 nvmf_dif -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:58.694 10:11:34 nvmf_dif -- common/autotest_common.sh@965 -- # echo 'killing process with pid 97651' 00:30:58.694 10:11:34 nvmf_dif -- common/autotest_common.sh@966 -- # kill 97651 00:30:58.694 [2024-05-15 10:11:34.794912] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:58.694 10:11:34 nvmf_dif -- common/autotest_common.sh@971 -- # wait 97651 00:30:58.694 10:11:35 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:58.694 10:11:35 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:58.694 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:58.694 Waiting for block devices as requested 00:30:58.694 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:58.694 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:58.694 10:11:35 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:58.694 10:11:35 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:58.694 10:11:35 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:58.694 10:11:35 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:58.694 10:11:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.694 10:11:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:58.694 10:11:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.694 10:11:35 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:58.694 00:30:58.694 real 1m2.038s 00:30:58.694 user 3m48.989s 00:30:58.694 sys 0m21.444s 00:30:58.694 10:11:35 nvmf_dif -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:58.694 10:11:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:58.695 ************************************ 00:30:58.695 END TEST nvmf_dif 00:30:58.695 ************************************ 00:30:58.695 10:11:36 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:58.695 10:11:36 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:58.695 10:11:36 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:58.695 10:11:36 -- common/autotest_common.sh@10 -- # set +x 00:30:58.695 ************************************ 00:30:58.695 START TEST nvmf_abort_qd_sizes 00:30:58.695 ************************************ 00:30:58.695 10:11:36 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:58.953 * Looking for test storage... 00:30:58.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.953 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:58.954 10:11:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:58.954 Cannot find device "nvmf_tgt_br" 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:58.954 Cannot find device "nvmf_tgt_br2" 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:58.954 Cannot find device "nvmf_tgt_br" 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:58.954 Cannot find device "nvmf_tgt_br2" 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:58.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:58.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:58.954 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:59.212 10:11:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:59.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:30:59.212 00:30:59.212 --- 10.0.0.2 ping statistics --- 00:30:59.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.212 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:30:59.212 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:59.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:59.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:30:59.212 00:30:59.212 --- 10.0.0.3 ping statistics --- 00:30:59.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.213 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:30:59.213 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:59.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:30:59.213 00:30:59.213 --- 10.0.0.1 ping statistics --- 00:30:59.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.213 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:30:59.213 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.213 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:30:59.213 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:59.213 10:11:36 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:00.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:00.149 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:00.149 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99010 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99010 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # '[' -z 99010 ']' 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:00.407 10:11:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:00.407 [2024-05-15 10:11:37.630989] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
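With the veth/namespace plumbing verified by the pings above, nvmfappstart launches the target itself: the whole app runs inside nvmf_tgt_ns_spdk with a 4-core mask (-m 0xf), all tracepoint groups enabled (-e 0xFFFF) and shared-memory id 0, and the wrapper blocks until the RPC socket answers before any rpc_cmd is issued. Reduced to plain shell, the launch is roughly:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!
  # waitforlisten then polls until the app is accepting RPCs on
  # /var/tmp/spdk.sock before the test goes on to create transports/subsystems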
00:31:00.407 [2024-05-15 10:11:37.631131] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.407 [2024-05-15 10:11:37.783982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:00.666 [2024-05-15 10:11:37.960361] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.666 [2024-05-15 10:11:37.960438] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.666 [2024-05-15 10:11:37.960455] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.666 [2024-05-15 10:11:37.960469] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.666 [2024-05-15 10:11:37.960480] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.666 [2024-05-15 10:11:37.960670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.666 [2024-05-15 10:11:37.960752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.666 [2024-05-15 10:11:37.961379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:00.666 [2024-05-15 10:11:37.961393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # return 0 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:31:01.603 10:11:38 
nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:31:01.603 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:01.604 10:11:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:01.604 ************************************ 00:31:01.604 START TEST spdk_target_abort 00:31:01.604 ************************************ 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # spdk_target 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:01.604 spdk_targetn1 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:01.604 [2024-05-15 10:11:38.874975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:01.604 [2024-05-15 10:11:38.902893] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:01.604 [2024-05-15 10:11:38.903233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:01.604 10:11:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:04.889 Initializing NVMe Controllers 00:31:04.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:04.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:04.889 Initialization complete. Launching workers. 
00:31:04.889 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12873, failed: 0 00:31:04.889 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 494, failed to submit 12379 00:31:04.889 success 272, unsuccess 222, failed 0 00:31:04.889 10:11:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:04.889 10:11:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:09.077 Initializing NVMe Controllers 00:31:09.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:09.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:09.078 Initialization complete. Launching workers. 00:31:09.078 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 2048, failed: 0 00:31:09.078 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 478, failed to submit 1570 00:31:09.078 success 140, unsuccess 338, failed 0 00:31:09.078 10:11:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:09.078 10:11:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:12.405 Initializing NVMe Controllers 00:31:12.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:12.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:12.405 Initialization complete. Launching workers. 
00:31:12.405 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24897, failed: 0 00:31:12.405 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1206, failed to submit 23691 00:31:12.405 success 66, unsuccess 1140, failed 0 00:31:12.405 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:12.405 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:12.405 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:12.405 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:12.405 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:12.405 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:12.405 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99010 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' -z 99010 ']' 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # kill -0 99010 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # uname 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99010 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:12.663 killing process with pid 99010 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99010' 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # kill 99010 00:31:12.663 [2024-05-15 10:11:49.945620] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:12.663 10:11:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # wait 99010 00:31:13.230 00:31:13.230 real 0m11.529s 00:31:13.230 user 0m46.233s 00:31:13.230 sys 0m2.434s 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:13.230 ************************************ 00:31:13.230 END TEST spdk_target_abort 00:31:13.230 ************************************ 00:31:13.230 10:11:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:13.230 10:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:31:13.230 10:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # 
xtrace_disable 00:31:13.230 10:11:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:13.230 ************************************ 00:31:13.230 START TEST kernel_target_abort 00:31:13.230 ************************************ 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # kernel_target 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:13.230 10:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:13.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:13.489 Waiting for block devices as requested 00:31:13.747 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:13.747 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:13.747 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:13.747 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:13.747 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:13.747 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:31:13.747 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:13.747 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:31:13.747 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:13.747 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:13.747 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:14.006 No valid GPT data, bailing 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n2 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:14.006 No valid GPT data, bailing 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n3 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:14.006 No valid GPT data, bailing 00:31:14.006 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:14.007 No valid GPT data, bailing 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:31:14.007 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 --hostid=8b97099d-9860-4879-a034-2bfa904443b4 -a 10.0.0.1 -t tcp -s 4420 00:31:14.266 00:31:14.266 Discovery Log Number of Records 2, Generation counter 2 00:31:14.266 =====Discovery Log Entry 0====== 00:31:14.266 trtype: tcp 00:31:14.266 adrfam: ipv4 00:31:14.266 subtype: current discovery subsystem 00:31:14.266 treq: not specified, sq flow control disable supported 00:31:14.266 portid: 1 00:31:14.266 trsvcid: 4420 00:31:14.266 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:14.266 traddr: 10.0.0.1 00:31:14.266 eflags: none 00:31:14.266 sectype: none 00:31:14.266 =====Discovery Log Entry 1====== 00:31:14.266 trtype: tcp 00:31:14.266 adrfam: ipv4 00:31:14.266 subtype: nvme subsystem 00:31:14.266 treq: not specified, sq flow control disable supported 00:31:14.266 portid: 1 00:31:14.266 trsvcid: 4420 00:31:14.266 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:14.266 traddr: 10.0.0.1 00:31:14.266 eflags: none 00:31:14.266 sectype: none 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:14.266 10:11:51 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:14.266 10:11:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:17.591 Initializing NVMe Controllers 00:31:17.591 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:17.591 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:17.591 Initialization complete. Launching workers. 00:31:17.591 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41198, failed: 0 00:31:17.591 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 41198, failed to submit 0 00:31:17.591 success 0, unsuccess 41198, failed 0 00:31:17.591 10:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:17.591 10:11:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:20.964 Initializing NVMe Controllers 00:31:20.964 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:20.964 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:20.964 Initialization complete. Launching workers. 
00:31:20.964 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74499, failed: 0 00:31:20.964 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32001, failed to submit 42498 00:31:20.964 success 0, unsuccess 32001, failed 0 00:31:20.964 10:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:20.964 10:11:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:24.309 Initializing NVMe Controllers 00:31:24.309 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:24.309 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:24.309 Initialization complete. Launching workers. 00:31:24.309 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86066, failed: 0 00:31:24.309 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21510, failed to submit 64556 00:31:24.309 success 0, unsuccess 21510, failed 0 00:31:24.309 10:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:24.309 10:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:24.309 10:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:24.309 10:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:24.309 10:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:24.309 10:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:24.309 10:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:24.309 10:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:24.309 10:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:24.309 10:12:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:24.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:27.109 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:27.368 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:27.368 ************************************ 00:31:27.368 END TEST kernel_target_abort 00:31:27.368 ************************************ 00:31:27.368 00:31:27.368 real 0m14.204s 00:31:27.368 user 0m6.408s 00:31:27.368 sys 0m5.247s 00:31:27.368 10:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:27.368 10:12:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:27.368 
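For readers tracing the kernel_target_abort setup above: the configure_kernel_target steps reduce to a short configfs sequence. The sketch below mirrors the mkdir/echo/ln calls visible in the trace; because the trace does not show redirection targets, the configfs attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumptions to be checked against nvmf/common.sh rather than a verbatim copy of that script.
# Hedged sketch of the kernel NVMe/TCP target setup traced above (run as root).
# Assumes the nvmet and nvmet_tcp modules are available and /dev/nvme1n1 is a free
# namespace, as in this run.
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
modprobe nvmet_tcp || true   # tcp transport module; may already be loaded

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo "SPDK-$nqn"  > "$subsys/attr_model"            # assumed attribute name
echo 1            > "$subsys/attr_allow_any_host"   # assumed attribute name
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

# Expose the subsystem on the port, then confirm with discovery as the log does.
ln -s "$subsys" "$port/subsystems/"
nvme discover -t tcp -a 10.0.0.1 -s 4420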
10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:27.368 rmmod nvme_tcp 00:31:27.368 rmmod nvme_fabrics 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99010 ']' 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99010 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@947 -- # '[' -z 99010 ']' 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # kill -0 99010 00:31:27.368 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 951: kill: (99010) - No such process 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@974 -- # echo 'Process with pid 99010 is not found' 00:31:27.368 Process with pid 99010 is not found 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:27.368 10:12:04 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:27.934 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:27.934 Waiting for block devices as requested 00:31:27.934 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:28.218 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:28.218 10:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:28.218 10:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:28.218 10:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:28.218 10:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:28.218 10:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.218 10:12:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:28.218 10:12:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.218 10:12:05 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:28.218 ************************************ 00:31:28.218 END TEST nvmf_abort_qd_sizes 00:31:28.218 00:31:28.218 real 0m29.431s 00:31:28.218 user 0m53.937s 00:31:28.218 sys 0m9.435s 00:31:28.218 10:12:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:28.218 10:12:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:28.218 ************************************ 00:31:28.218 10:12:05 -- spdk/autotest.sh@291 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:28.218 10:12:05 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:31:28.218 10:12:05 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:28.218 10:12:05 -- common/autotest_common.sh@10 -- # set +x 00:31:28.218 
************************************ 00:31:28.218 START TEST keyring_file 00:31:28.218 ************************************ 00:31:28.218 10:12:05 keyring_file -- common/autotest_common.sh@1122 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:28.218 * Looking for test storage... 00:31:28.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:28.218 10:12:05 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:28.218 10:12:05 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b97099d-9860-4879-a034-2bfa904443b4 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8b97099d-9860-4879-a034-2bfa904443b4 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:28.496 10:12:05 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.496 10:12:05 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.496 10:12:05 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.496 10:12:05 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.496 10:12:05 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.496 10:12:05 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.496 10:12:05 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:28.496 10:12:05 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:28.496 10:12:05 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:28.496 10:12:05 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:28.496 10:12:05 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:28.496 10:12:05 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:28.496 10:12:05 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:28.496 10:12:05 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sV9bHsgjj8 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:28.496 10:12:05 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sV9bHsgjj8 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sV9bHsgjj8 00:31:28.496 10:12:05 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.sV9bHsgjj8 00:31:28.496 10:12:05 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.s0Kd6V7DFn 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:28.496 10:12:05 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:28.496 10:12:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.s0Kd6V7DFn 00:31:28.497 10:12:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.s0Kd6V7DFn 00:31:28.497 10:12:05 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.s0Kd6V7DFn 00:31:28.497 10:12:05 keyring_file -- keyring/file.sh@30 -- # tgtpid=99903 00:31:28.497 10:12:05 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:28.497 10:12:05 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99903 00:31:28.497 10:12:05 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 99903 ']' 00:31:28.497 10:12:05 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.497 10:12:05 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:28.497 10:12:05 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:28.497 10:12:05 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:28.497 10:12:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:28.497 [2024-05-15 10:12:05.823489] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:31:28.497 [2024-05-15 10:12:05.823874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99903 ] 00:31:28.754 [2024-05-15 10:12:05.967116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.012 [2024-05-15 10:12:06.142143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:31:29.579 10:12:06 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:29.579 [2024-05-15 10:12:06.782502] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.579 null0 00:31:29.579 [2024-05-15 10:12:06.814421] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:29.579 [2024-05-15 10:12:06.814504] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:29.579 [2024-05-15 10:12:06.814746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:29.579 [2024-05-15 10:12:06.822456] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.579 10:12:06 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:29.579 [2024-05-15 10:12:06.834584] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:29.579 2024/05/15 10:12:06 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:31:29.579 request: 00:31:29.579 { 00:31:29.579 "method": "nvmf_subsystem_add_listener", 00:31:29.579 "params": { 00:31:29.579 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:29.579 "secure_channel": false, 
00:31:29.579 "listen_address": { 00:31:29.579 "trtype": "tcp", 00:31:29.579 "traddr": "127.0.0.1", 00:31:29.579 "trsvcid": "4420" 00:31:29.579 } 00:31:29.579 } 00:31:29.579 } 00:31:29.579 Got JSON-RPC error response 00:31:29.579 GoRPCClient: error on JSON-RPC call 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:29.579 10:12:06 keyring_file -- keyring/file.sh@46 -- # bperfpid=99934 00:31:29.579 10:12:06 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99934 /var/tmp/bperf.sock 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 99934 ']' 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:29.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:29.579 10:12:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:29.579 10:12:06 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:29.579 [2024-05-15 10:12:06.902848] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 
00:31:29.579 [2024-05-15 10:12:06.902964] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99934 ] 00:31:29.837 [2024-05-15 10:12:07.046732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.095 [2024-05-15 10:12:07.241108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.663 10:12:07 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:30.663 10:12:07 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:31:30.663 10:12:07 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sV9bHsgjj8 00:31:30.663 10:12:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sV9bHsgjj8 00:31:30.921 10:12:08 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.s0Kd6V7DFn 00:31:30.921 10:12:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.s0Kd6V7DFn 00:31:31.191 10:12:08 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:31.191 10:12:08 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:31.191 10:12:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:31.191 10:12:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:31.191 10:12:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:31.449 10:12:08 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.sV9bHsgjj8 == \/\t\m\p\/\t\m\p\.\s\V\9\b\H\s\g\j\j\8 ]] 00:31:31.449 10:12:08 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:31.449 10:12:08 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:31.449 10:12:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:31.449 10:12:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:31.449 10:12:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:31.706 10:12:09 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.s0Kd6V7DFn == \/\t\m\p\/\t\m\p\.\s\0\K\d\6\V\7\D\F\n ]] 00:31:31.706 10:12:09 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:31.706 10:12:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:31.706 10:12:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:31.706 10:12:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:31.706 10:12:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:31.706 10:12:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:31.964 10:12:09 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:31.964 10:12:09 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:31.964 10:12:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:31.964 10:12:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:31.964 10:12:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:31.964 10:12:09 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:31.964 10:12:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:32.530 10:12:09 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:32.530 10:12:09 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:32.530 10:12:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:32.530 [2024-05-15 10:12:09.878729] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:32.788 nvme0n1 00:31:32.788 10:12:09 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:32.788 10:12:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:32.788 10:12:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:32.788 10:12:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:32.788 10:12:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:32.788 10:12:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:33.047 10:12:10 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:33.047 10:12:10 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:33.047 10:12:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:33.047 10:12:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:33.047 10:12:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:33.047 10:12:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:33.047 10:12:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:33.306 10:12:10 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:33.306 10:12:10 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:33.306 Running I/O for 1 seconds... 
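The trace above registers the two key files, cross-checks path and refcount via keyring_get_keys, attaches an NVMe/TCP controller that uses key0 as its TLS PSK, and then kicks off the queued I/O job. Condensed into a sketch with the commands taken verbatim from the log (the $rpc shorthand and the combined jq pipe are the only additions; the test does the jq steps separately through its get_key/get_refcnt helpers):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Register both key files with the keyring.
    $rpc keyring_file_add_key key0 /tmp/tmp.sV9bHsgjj8
    $rpc keyring_file_add_key key1 /tmp/tmp.s0Kd6V7DFn

    # get_refcnt equivalent: select one entry from keyring_get_keys and read .refcnt.
    $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'

    # Attach an NVMe/TCP controller that uses key0 as its TLS PSK ...
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    # ... then run the queued bdevperf job over the same socket.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests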
00:31:34.682 00:31:34.682 Latency(us) 00:31:34.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:34.682 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:34.682 nvme0n1 : 1.01 14236.62 55.61 0.00 0.00 8962.94 5086.84 15978.30 00:31:34.682 =================================================================================================================== 00:31:34.682 Total : 14236.62 55.61 0.00 0.00 8962.94 5086.84 15978.30 00:31:34.682 0 00:31:34.682 10:12:11 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:34.682 10:12:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:34.682 10:12:11 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:34.682 10:12:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:34.682 10:12:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:34.682 10:12:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:34.682 10:12:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:34.682 10:12:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:34.940 10:12:12 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:34.940 10:12:12 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:34.940 10:12:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:34.940 10:12:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:34.940 10:12:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:34.940 10:12:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:34.940 10:12:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:35.507 10:12:12 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:35.507 10:12:12 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:35.507 10:12:12 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:31:35.507 10:12:12 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:35.507 10:12:12 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:31:35.507 10:12:12 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:35.507 10:12:12 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:31:35.507 10:12:12 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:35.507 10:12:12 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:35.507 10:12:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:35.765 [2024-05-15 10:12:12.921299] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:35.765 [2024-05-15 10:12:12.922286] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbdfc0 (107): Transport endpoint is not connected 00:31:35.765 [2024-05-15 10:12:12.923239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbdfc0 (9): Bad file descriptor 00:31:35.765 [2024-05-15 10:12:12.924236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:35.765 [2024-05-15 10:12:12.924477] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:35.765 [2024-05-15 10:12:12.924657] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:35.765 2024/05/15 10:12:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:31:35.765 request: 00:31:35.765 { 00:31:35.765 "method": "bdev_nvme_attach_controller", 00:31:35.765 "params": { 00:31:35.765 "name": "nvme0", 00:31:35.765 "trtype": "tcp", 00:31:35.765 "traddr": "127.0.0.1", 00:31:35.765 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.765 "adrfam": "ipv4", 00:31:35.765 "trsvcid": "4420", 00:31:35.765 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.765 "psk": "key1" 00:31:35.765 } 00:31:35.765 } 00:31:35.765 Got JSON-RPC error response 00:31:35.765 GoRPCClient: error on JSON-RPC call 00:31:35.765 10:12:12 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:31:35.765 10:12:12 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:35.765 10:12:12 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:35.765 10:12:12 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:35.765 10:12:12 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:35.765 10:12:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:35.765 10:12:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:35.765 10:12:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:35.765 10:12:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:35.765 10:12:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:36.022 10:12:13 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:36.022 10:12:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:36.022 10:12:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:36.022 10:12:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:36.022 10:12:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:36.022 10:12:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:36.022 10:12:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:36.280 10:12:13 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:36.280 10:12:13 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:36.280 10:12:13 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:36.537 10:12:13 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:36.537 10:12:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:36.796 10:12:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:36.796 10:12:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:36.796 10:12:14 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:37.054 10:12:14 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:37.054 10:12:14 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.sV9bHsgjj8 00:31:37.054 10:12:14 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.sV9bHsgjj8 00:31:37.054 10:12:14 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:31:37.054 10:12:14 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.sV9bHsgjj8 00:31:37.054 10:12:14 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:31:37.054 10:12:14 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:37.054 10:12:14 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:31:37.054 10:12:14 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:37.054 10:12:14 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sV9bHsgjj8 00:31:37.054 10:12:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sV9bHsgjj8 00:31:37.313 [2024-05-15 10:12:14.554645] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.sV9bHsgjj8': 0100660 00:31:37.313 [2024-05-15 10:12:14.555227] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:37.313 2024/05/15 10:12:14 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.sV9bHsgjj8], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:31:37.313 request: 00:31:37.313 { 00:31:37.313 "method": "keyring_file_add_key", 00:31:37.313 "params": { 00:31:37.313 "name": "key0", 00:31:37.313 "path": "/tmp/tmp.sV9bHsgjj8" 00:31:37.313 } 00:31:37.313 } 00:31:37.313 Got JSON-RPC error response 00:31:37.313 GoRPCClient: error on JSON-RPC call 00:31:37.313 10:12:14 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:31:37.313 10:12:14 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:37.313 10:12:14 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:37.313 10:12:14 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:37.313 10:12:14 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.sV9bHsgjj8 00:31:37.313 10:12:14 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sV9bHsgjj8 00:31:37.313 10:12:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sV9bHsgjj8 00:31:37.571 10:12:14 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.sV9bHsgjj8 00:31:37.571 10:12:14 
keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:37.571 10:12:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:37.571 10:12:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:37.571 10:12:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:37.571 10:12:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:37.571 10:12:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:37.830 10:12:15 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:37.830 10:12:15 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:37.830 10:12:15 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:31:37.830 10:12:15 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:37.830 10:12:15 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:31:37.830 10:12:15 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:37.830 10:12:15 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:31:37.830 10:12:15 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:31:37.830 10:12:15 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:37.830 10:12:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:38.089 [2024-05-15 10:12:15.390872] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.sV9bHsgjj8': No such file or directory 00:31:38.089 [2024-05-15 10:12:15.391462] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:38.089 [2024-05-15 10:12:15.391749] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:38.089 [2024-05-15 10:12:15.391993] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:38.089 [2024-05-15 10:12:15.392202] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:38.089 2024/05/15 10:12:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:31:38.089 request: 00:31:38.089 { 00:31:38.089 "method": "bdev_nvme_attach_controller", 00:31:38.089 "params": { 00:31:38.089 "name": "nvme0", 00:31:38.089 "trtype": "tcp", 00:31:38.089 "traddr": "127.0.0.1", 00:31:38.089 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:38.089 "adrfam": "ipv4", 00:31:38.089 "trsvcid": "4420", 00:31:38.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:38.089 "psk": "key0" 00:31:38.089 } 00:31:38.089 } 
00:31:38.089 Got JSON-RPC error response 00:31:38.089 GoRPCClient: error on JSON-RPC call 00:31:38.089 10:12:15 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:31:38.089 10:12:15 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:31:38.089 10:12:15 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:31:38.089 10:12:15 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:31:38.089 10:12:15 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:38.089 10:12:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:38.347 10:12:15 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:38.347 10:12:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:38.347 10:12:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:38.347 10:12:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:38.348 10:12:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:38.348 10:12:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:38.348 10:12:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nRYYfpCoxW 00:31:38.348 10:12:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:38.348 10:12:15 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:38.348 10:12:15 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:38.348 10:12:15 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:38.348 10:12:15 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:38.348 10:12:15 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:38.348 10:12:15 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:38.348 10:12:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nRYYfpCoxW 00:31:38.606 10:12:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nRYYfpCoxW 00:31:38.606 10:12:15 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.nRYYfpCoxW 00:31:38.606 10:12:15 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nRYYfpCoxW 00:31:38.606 10:12:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nRYYfpCoxW 00:31:38.865 10:12:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:38.865 10:12:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:39.124 nvme0n1 00:31:39.124 10:12:16 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:39.124 10:12:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:39.124 10:12:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:39.124 10:12:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:39.124 10:12:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:39.124 10:12:16 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:39.691 10:12:16 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:39.691 10:12:16 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:39.691 10:12:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:39.948 10:12:17 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:39.948 10:12:17 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:39.948 10:12:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:39.948 10:12:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:39.948 10:12:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:40.206 10:12:17 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:40.206 10:12:17 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:40.207 10:12:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:40.207 10:12:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:40.207 10:12:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:40.207 10:12:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:40.207 10:12:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:40.465 10:12:17 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:40.465 10:12:17 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:40.465 10:12:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:40.723 10:12:18 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:40.723 10:12:18 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:40.723 10:12:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:40.981 10:12:18 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:40.981 10:12:18 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nRYYfpCoxW 00:31:40.981 10:12:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nRYYfpCoxW 00:31:41.240 10:12:18 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.s0Kd6V7DFn 00:31:41.240 10:12:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.s0Kd6V7DFn 00:31:41.499 10:12:18 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:41.499 10:12:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:42.066 nvme0n1 00:31:42.066 10:12:19 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:42.066 10:12:19 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:42.325 10:12:19 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:42.325 "subsystems": [ 00:31:42.325 { 00:31:42.325 "subsystem": "keyring", 00:31:42.325 "config": [ 00:31:42.325 { 00:31:42.325 "method": "keyring_file_add_key", 00:31:42.325 "params": { 00:31:42.325 "name": "key0", 00:31:42.325 "path": "/tmp/tmp.nRYYfpCoxW" 00:31:42.325 } 00:31:42.325 }, 00:31:42.325 { 00:31:42.325 "method": "keyring_file_add_key", 00:31:42.325 "params": { 00:31:42.325 "name": "key1", 00:31:42.325 "path": "/tmp/tmp.s0Kd6V7DFn" 00:31:42.325 } 00:31:42.325 } 00:31:42.325 ] 00:31:42.325 }, 00:31:42.325 { 00:31:42.325 "subsystem": "iobuf", 00:31:42.325 "config": [ 00:31:42.325 { 00:31:42.325 "method": "iobuf_set_options", 00:31:42.325 "params": { 00:31:42.325 "large_bufsize": 135168, 00:31:42.325 "large_pool_count": 1024, 00:31:42.325 "small_bufsize": 8192, 00:31:42.325 "small_pool_count": 8192 00:31:42.325 } 00:31:42.325 } 00:31:42.325 ] 00:31:42.325 }, 00:31:42.325 { 00:31:42.325 "subsystem": "sock", 00:31:42.325 "config": [ 00:31:42.325 { 00:31:42.325 "method": "sock_impl_set_options", 00:31:42.325 "params": { 00:31:42.325 "enable_ktls": false, 00:31:42.325 "enable_placement_id": 0, 00:31:42.325 "enable_quickack": false, 00:31:42.325 "enable_recv_pipe": true, 00:31:42.325 "enable_zerocopy_send_client": false, 00:31:42.325 "enable_zerocopy_send_server": true, 00:31:42.325 "impl_name": "posix", 00:31:42.325 "recv_buf_size": 2097152, 00:31:42.325 "send_buf_size": 2097152, 00:31:42.325 "tls_version": 0, 00:31:42.325 "zerocopy_threshold": 0 00:31:42.325 } 00:31:42.325 }, 00:31:42.325 { 00:31:42.325 "method": "sock_impl_set_options", 00:31:42.325 "params": { 00:31:42.325 "enable_ktls": false, 00:31:42.325 "enable_placement_id": 0, 00:31:42.325 "enable_quickack": false, 00:31:42.325 "enable_recv_pipe": true, 00:31:42.325 "enable_zerocopy_send_client": false, 00:31:42.325 "enable_zerocopy_send_server": true, 00:31:42.325 "impl_name": "ssl", 00:31:42.325 "recv_buf_size": 4096, 00:31:42.325 "send_buf_size": 4096, 00:31:42.325 "tls_version": 0, 00:31:42.325 "zerocopy_threshold": 0 00:31:42.325 } 00:31:42.325 } 00:31:42.325 ] 00:31:42.325 }, 00:31:42.325 { 00:31:42.325 "subsystem": "vmd", 00:31:42.325 "config": [] 00:31:42.325 }, 00:31:42.325 { 00:31:42.325 "subsystem": "accel", 00:31:42.325 "config": [ 00:31:42.325 { 00:31:42.326 "method": "accel_set_options", 00:31:42.326 "params": { 00:31:42.326 "buf_count": 2048, 00:31:42.326 "large_cache_size": 16, 00:31:42.326 "sequence_count": 2048, 00:31:42.326 "small_cache_size": 128, 00:31:42.326 "task_count": 2048 00:31:42.326 } 00:31:42.326 } 00:31:42.326 ] 00:31:42.326 }, 00:31:42.326 { 00:31:42.326 "subsystem": "bdev", 00:31:42.326 "config": [ 00:31:42.326 { 00:31:42.326 "method": "bdev_set_options", 00:31:42.326 "params": { 00:31:42.326 "bdev_auto_examine": true, 00:31:42.326 "bdev_io_cache_size": 256, 00:31:42.326 "bdev_io_pool_size": 65535, 00:31:42.326 "iobuf_large_cache_size": 16, 00:31:42.326 "iobuf_small_cache_size": 128 00:31:42.326 } 00:31:42.326 }, 00:31:42.326 { 00:31:42.326 "method": "bdev_raid_set_options", 00:31:42.326 "params": { 00:31:42.326 "process_window_size_kb": 1024 00:31:42.326 } 00:31:42.326 }, 00:31:42.326 { 00:31:42.326 "method": "bdev_iscsi_set_options", 00:31:42.326 "params": { 00:31:42.326 "timeout_sec": 30 00:31:42.326 } 00:31:42.326 }, 00:31:42.326 { 00:31:42.326 "method": "bdev_nvme_set_options", 00:31:42.326 "params": { 00:31:42.326 
"action_on_timeout": "none", 00:31:42.326 "allow_accel_sequence": false, 00:31:42.326 "arbitration_burst": 0, 00:31:42.326 "bdev_retry_count": 3, 00:31:42.326 "ctrlr_loss_timeout_sec": 0, 00:31:42.326 "delay_cmd_submit": true, 00:31:42.326 "dhchap_dhgroups": [ 00:31:42.326 "null", 00:31:42.326 "ffdhe2048", 00:31:42.326 "ffdhe3072", 00:31:42.326 "ffdhe4096", 00:31:42.326 "ffdhe6144", 00:31:42.326 "ffdhe8192" 00:31:42.326 ], 00:31:42.326 "dhchap_digests": [ 00:31:42.326 "sha256", 00:31:42.326 "sha384", 00:31:42.326 "sha512" 00:31:42.326 ], 00:31:42.326 "disable_auto_failback": false, 00:31:42.326 "fast_io_fail_timeout_sec": 0, 00:31:42.326 "generate_uuids": false, 00:31:42.326 "high_priority_weight": 0, 00:31:42.326 "io_path_stat": false, 00:31:42.326 "io_queue_requests": 512, 00:31:42.326 "keep_alive_timeout_ms": 10000, 00:31:42.326 "low_priority_weight": 0, 00:31:42.326 "medium_priority_weight": 0, 00:31:42.326 "nvme_adminq_poll_period_us": 10000, 00:31:42.326 "nvme_error_stat": false, 00:31:42.326 "nvme_ioq_poll_period_us": 0, 00:31:42.326 "rdma_cm_event_timeout_ms": 0, 00:31:42.326 "rdma_max_cq_size": 0, 00:31:42.326 "rdma_srq_size": 0, 00:31:42.326 "reconnect_delay_sec": 0, 00:31:42.326 "timeout_admin_us": 0, 00:31:42.326 "timeout_us": 0, 00:31:42.326 "transport_ack_timeout": 0, 00:31:42.326 "transport_retry_count": 4, 00:31:42.326 "transport_tos": 0 00:31:42.326 } 00:31:42.326 }, 00:31:42.326 { 00:31:42.326 "method": "bdev_nvme_attach_controller", 00:31:42.326 "params": { 00:31:42.326 "adrfam": "IPv4", 00:31:42.326 "ctrlr_loss_timeout_sec": 0, 00:31:42.326 "ddgst": false, 00:31:42.326 "fast_io_fail_timeout_sec": 0, 00:31:42.326 "hdgst": false, 00:31:42.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.326 "name": "nvme0", 00:31:42.326 "prchk_guard": false, 00:31:42.326 "prchk_reftag": false, 00:31:42.326 "psk": "key0", 00:31:42.326 "reconnect_delay_sec": 0, 00:31:42.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.326 "traddr": "127.0.0.1", 00:31:42.326 "trsvcid": "4420", 00:31:42.326 "trtype": "TCP" 00:31:42.326 } 00:31:42.326 }, 00:31:42.326 { 00:31:42.326 "method": "bdev_nvme_set_hotplug", 00:31:42.326 "params": { 00:31:42.326 "enable": false, 00:31:42.326 "period_us": 100000 00:31:42.326 } 00:31:42.326 }, 00:31:42.326 { 00:31:42.326 "method": "bdev_wait_for_examine" 00:31:42.326 } 00:31:42.326 ] 00:31:42.326 }, 00:31:42.326 { 00:31:42.326 "subsystem": "nbd", 00:31:42.326 "config": [] 00:31:42.326 } 00:31:42.326 ] 00:31:42.326 }' 00:31:42.326 10:12:19 keyring_file -- keyring/file.sh@114 -- # killprocess 99934 00:31:42.326 10:12:19 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 99934 ']' 00:31:42.326 10:12:19 keyring_file -- common/autotest_common.sh@951 -- # kill -0 99934 00:31:42.326 10:12:19 keyring_file -- common/autotest_common.sh@952 -- # uname 00:31:42.326 10:12:19 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:42.326 10:12:19 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99934 00:31:42.326 killing process with pid 99934 00:31:42.326 Received shutdown signal, test time was about 1.000000 seconds 00:31:42.326 00:31:42.326 Latency(us) 00:31:42.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.326 =================================================================================================================== 00:31:42.326 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:42.326 10:12:19 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 
00:31:42.326 10:12:19 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:31:42.326 10:12:19 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99934' 00:31:42.326 10:12:19 keyring_file -- common/autotest_common.sh@966 -- # kill 99934 00:31:42.326 10:12:19 keyring_file -- common/autotest_common.sh@971 -- # wait 99934 00:31:42.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:42.584 10:12:19 keyring_file -- keyring/file.sh@117 -- # bperfpid=100422 00:31:42.584 10:12:19 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100422 /var/tmp/bperf.sock 00:31:42.584 10:12:19 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 100422 ']' 00:31:42.584 10:12:19 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:42.584 10:12:19 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:31:42.584 10:12:19 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:42.584 10:12:19 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:31:42.584 10:12:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:42.584 10:12:19 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:42.584 "subsystems": [ 00:31:42.584 { 00:31:42.584 "subsystem": "keyring", 00:31:42.584 "config": [ 00:31:42.584 { 00:31:42.584 "method": "keyring_file_add_key", 00:31:42.584 "params": { 00:31:42.584 "name": "key0", 00:31:42.584 "path": "/tmp/tmp.nRYYfpCoxW" 00:31:42.584 } 00:31:42.584 }, 00:31:42.584 { 00:31:42.584 "method": "keyring_file_add_key", 00:31:42.584 "params": { 00:31:42.584 "name": "key1", 00:31:42.584 "path": "/tmp/tmp.s0Kd6V7DFn" 00:31:42.584 } 00:31:42.584 } 00:31:42.584 ] 00:31:42.584 }, 00:31:42.584 { 00:31:42.584 "subsystem": "iobuf", 00:31:42.584 "config": [ 00:31:42.584 { 00:31:42.584 "method": "iobuf_set_options", 00:31:42.584 "params": { 00:31:42.584 "large_bufsize": 135168, 00:31:42.584 "large_pool_count": 1024, 00:31:42.584 "small_bufsize": 8192, 00:31:42.584 "small_pool_count": 8192 00:31:42.584 } 00:31:42.584 } 00:31:42.584 ] 00:31:42.584 }, 00:31:42.584 { 00:31:42.584 "subsystem": "sock", 00:31:42.584 "config": [ 00:31:42.584 { 00:31:42.584 "method": "sock_impl_set_options", 00:31:42.584 "params": { 00:31:42.584 "enable_ktls": false, 00:31:42.584 "enable_placement_id": 0, 00:31:42.584 "enable_quickack": false, 00:31:42.584 "enable_recv_pipe": true, 00:31:42.584 "enable_zerocopy_send_client": false, 00:31:42.584 "enable_zerocopy_send_server": true, 00:31:42.584 "impl_name": "posix", 00:31:42.584 "recv_buf_size": 2097152, 00:31:42.584 "send_buf_size": 2097152, 00:31:42.584 "tls_version": 0, 00:31:42.584 "zerocopy_threshold": 0 00:31:42.584 } 00:31:42.584 }, 00:31:42.584 { 00:31:42.584 "method": "sock_impl_set_options", 00:31:42.584 "params": { 00:31:42.584 "enable_ktls": false, 00:31:42.584 "enable_placement_id": 0, 00:31:42.584 "enable_quickack": false, 00:31:42.584 "enable_recv_pipe": true, 00:31:42.584 "enable_zerocopy_send_client": false, 00:31:42.584 "enable_zerocopy_send_server": true, 00:31:42.584 "impl_name": "ssl", 00:31:42.584 "recv_buf_size": 4096, 00:31:42.584 "send_buf_size": 4096, 00:31:42.584 "tls_version": 0, 00:31:42.584 "zerocopy_threshold": 0 00:31:42.584 } 00:31:42.584 } 00:31:42.584 ] 00:31:42.584 }, 00:31:42.584 { 00:31:42.585 "subsystem": "vmd", 00:31:42.585 "config": [] 00:31:42.585 }, 00:31:42.585 { 
00:31:42.585 "subsystem": "accel", 00:31:42.585 "config": [ 00:31:42.585 { 00:31:42.585 "method": "accel_set_options", 00:31:42.585 "params": { 00:31:42.585 "buf_count": 2048, 00:31:42.585 "large_cache_size": 16, 00:31:42.585 "sequence_count": 2048, 00:31:42.585 "small_cache_size": 128, 00:31:42.585 "task_count": 2048 00:31:42.585 } 00:31:42.585 } 00:31:42.585 ] 00:31:42.585 }, 00:31:42.585 { 00:31:42.585 "subsystem": "bdev", 00:31:42.585 "config": [ 00:31:42.585 { 00:31:42.585 "method": "bdev_set_options", 00:31:42.585 "params": { 00:31:42.585 "bdev_auto_examine": true, 00:31:42.585 "bdev_io_cache_size": 256, 00:31:42.585 "bdev_io_pool_size": 65535, 00:31:42.585 "iobuf_large_cache_size": 16, 00:31:42.585 "iobuf_small_cache_size": 128 00:31:42.585 } 00:31:42.585 }, 00:31:42.585 { 00:31:42.585 "method": "bdev_raid_set_options", 00:31:42.585 "params": { 00:31:42.585 "process_window_size_kb": 1024 00:31:42.585 } 00:31:42.585 }, 00:31:42.585 { 00:31:42.585 "method": "bdev_iscsi_set_options", 00:31:42.585 "params": { 00:31:42.585 "timeout_sec": 30 00:31:42.585 } 00:31:42.585 }, 00:31:42.585 { 00:31:42.585 "method": "bdev_nvme_set_options", 00:31:42.585 "params": { 00:31:42.585 "action_on_timeout": "none", 00:31:42.585 "allow_accel_sequence": false, 00:31:42.585 "arbitration_burst": 0, 00:31:42.585 "bdev_retry_count": 3, 00:31:42.585 "ctrlr_loss_timeout_sec": 0, 00:31:42.585 "delay_cmd_submit": true, 00:31:42.585 "dhchap_dhgroups": [ 00:31:42.585 "null", 00:31:42.585 "ffdhe2048", 00:31:42.585 "ffdhe3072", 00:31:42.585 "ffdhe4096", 00:31:42.585 "ffdhe6144", 00:31:42.585 "ffdhe8192" 00:31:42.585 ], 00:31:42.585 "dhchap_digests": [ 00:31:42.585 "sha256", 00:31:42.585 "sha384", 00:31:42.585 "sha512" 00:31:42.585 ], 00:31:42.585 "disable_auto_failback": false, 00:31:42.585 "fast_io_fail_timeout_sec": 0, 00:31:42.585 "generate_uuids": false, 00:31:42.585 "high_priority_weight": 0, 00:31:42.585 "io_path_stat": false, 00:31:42.585 "io_queue_requests": 512, 00:31:42.585 "keep_alive_timeout_ms": 10000, 00:31:42.585 "low_priority_weight": 0, 00:31:42.585 "medium_priority_weight": 0, 00:31:42.585 "nvme_adminq_poll_period_us": 10000, 00:31:42.585 "nvme_error_stat": false, 00:31:42.585 "nvme_ioq_poll_period_us": 0, 00:31:42.585 "rdma_cm_event_timeout_ms": 0, 00:31:42.585 "rdma_max_cq_size": 0, 00:31:42.585 "rdma_srq_size": 0, 00:31:42.585 "reconnect_delay_sec": 0, 00:31:42.585 "timeout_admin_us": 0, 00:31:42.585 "timeout_us": 0, 00:31:42.585 "transport_ack_timeout": 0, 00:31:42.585 "transport_retry_count": 4, 00:31:42.585 "transport_tos": 0 00:31:42.585 } 00:31:42.585 }, 00:31:42.585 { 00:31:42.585 "method": "bdev_nvme_attach_controller", 00:31:42.585 "params": { 00:31:42.585 "adrfam": "IPv4", 00:31:42.585 "ctrlr_loss_timeout_sec": 0, 00:31:42.585 "ddgst": false, 00:31:42.585 "fast_io_fail_timeout_sec": 0, 00:31:42.585 "hdgst": false, 00:31:42.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.585 "name": "nvme0", 00:31:42.585 "prchk_guard": false, 00:31:42.585 "prchk_reftag": false, 00:31:42.585 "psk": "key0", 00:31:42.585 "reconnect_delay_sec": 0, 00:31:42.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.585 "traddr": "127.0.0.1", 00:31:42.585 "trsvcid": "4420", 00:31:42.585 "trtype": "TCP" 00:31:42.585 } 00:31:42.585 }, 00:31:42.585 { 00:31:42.585 "method": "bdev_nvme_set_hotplug", 00:31:42.585 "params": { 00:31:42.585 "enable": false, 00:31:42.585 "period_us": 100000 00:31:42.585 } 00:31:42.585 }, 00:31:42.585 { 00:31:42.585 "method": "bdev_wait_for_examine" 00:31:42.585 } 00:31:42.585 ] 
00:31:42.585 }, 00:31:42.585 { 00:31:42.585 "subsystem": "nbd", 00:31:42.585 "config": [] 00:31:42.585 } 00:31:42.585 ] 00:31:42.585 }' 00:31:42.585 10:12:19 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:42.843 [2024-05-15 10:12:20.013738] Starting SPDK v24.05-pre git sha1 567565736 / DPDK 23.11.0 initialization... 00:31:42.843 [2024-05-15 10:12:20.014143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100422 ] 00:31:42.843 [2024-05-15 10:12:20.155123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.102 [2024-05-15 10:12:20.319337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.361 [2024-05-15 10:12:20.538172] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:43.928 10:12:21 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:31:43.928 10:12:21 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:31:43.928 10:12:21 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:43.928 10:12:21 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:43.928 10:12:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:44.207 10:12:21 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:44.207 10:12:21 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:44.207 10:12:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:44.207 10:12:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:44.207 10:12:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:44.207 10:12:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:44.207 10:12:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:44.465 10:12:21 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:44.465 10:12:21 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:44.465 10:12:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:44.465 10:12:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:44.465 10:12:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:44.465 10:12:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:44.465 10:12:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:44.723 10:12:21 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:44.723 10:12:21 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:44.723 10:12:21 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:44.723 10:12:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:44.981 10:12:22 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:44.981 10:12:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:44.981 10:12:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.nRYYfpCoxW /tmp/tmp.s0Kd6V7DFn 00:31:44.981 
10:12:22 keyring_file -- keyring/file.sh@20 -- # killprocess 100422 00:31:44.981 10:12:22 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 100422 ']' 00:31:44.981 10:12:22 keyring_file -- common/autotest_common.sh@951 -- # kill -0 100422 00:31:44.981 10:12:22 keyring_file -- common/autotest_common.sh@952 -- # uname 00:31:44.981 10:12:22 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:44.981 10:12:22 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 100422 00:31:44.981 10:12:22 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:31:44.981 10:12:22 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:31:44.981 10:12:22 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 100422' 00:31:44.981 killing process with pid 100422 00:31:44.981 10:12:22 keyring_file -- common/autotest_common.sh@966 -- # kill 100422 00:31:44.981 Received shutdown signal, test time was about 1.000000 seconds 00:31:44.981 00:31:44.981 Latency(us) 00:31:44.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.981 =================================================================================================================== 00:31:44.981 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:44.981 10:12:22 keyring_file -- common/autotest_common.sh@971 -- # wait 100422 00:31:45.547 10:12:22 keyring_file -- keyring/file.sh@21 -- # killprocess 99903 00:31:45.547 10:12:22 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 99903 ']' 00:31:45.547 10:12:22 keyring_file -- common/autotest_common.sh@951 -- # kill -0 99903 00:31:45.547 10:12:22 keyring_file -- common/autotest_common.sh@952 -- # uname 00:31:45.547 10:12:22 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:45.547 10:12:22 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 99903 00:31:45.547 10:12:22 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:45.547 10:12:22 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:45.547 10:12:22 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 99903' 00:31:45.547 killing process with pid 99903 00:31:45.547 10:12:22 keyring_file -- common/autotest_common.sh@966 -- # kill 99903 00:31:45.547 [2024-05-15 10:12:22.684390] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:45.547 10:12:22 keyring_file -- common/autotest_common.sh@971 -- # wait 99903 00:31:45.547 [2024-05-15 10:12:22.684515] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:46.114 00:31:46.114 real 0m17.843s 00:31:46.114 user 0m43.009s 00:31:46.114 sys 0m4.510s 00:31:46.114 ************************************ 00:31:46.114 END TEST keyring_file 00:31:46.114 ************************************ 00:31:46.114 10:12:23 keyring_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:46.114 10:12:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:46.114 10:12:23 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:31:46.114 10:12:23 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:31:46.114 10:12:23 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:46.114 10:12:23 -- spdk/autotest.sh@312 -- # '[' 
0 -eq 1 ']' 00:31:46.114 10:12:23 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:31:46.114 10:12:23 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:31:46.114 10:12:23 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:31:46.114 10:12:23 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:46.114 10:12:23 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:46.114 10:12:23 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:46.114 10:12:23 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:31:46.114 10:12:23 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:46.114 10:12:23 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:31:46.114 10:12:23 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:46.114 10:12:23 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:46.114 10:12:23 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:46.114 10:12:23 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:31:46.114 10:12:23 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:31:46.114 10:12:23 -- common/autotest_common.sh@721 -- # xtrace_disable 00:31:46.114 10:12:23 -- common/autotest_common.sh@10 -- # set +x 00:31:46.114 10:12:23 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:31:46.114 10:12:23 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:31:46.114 10:12:23 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:31:46.114 10:12:23 -- common/autotest_common.sh@10 -- # set +x 00:31:48.015 INFO: APP EXITING 00:31:48.015 INFO: killing all VMs 00:31:48.015 INFO: killing vhost app 00:31:48.015 INFO: EXIT DONE 00:31:48.583 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:48.583 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:48.583 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:49.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:49.518 Cleaning 00:31:49.518 Removing: /var/run/dpdk/spdk0/config 00:31:49.518 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:49.518 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:49.518 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:49.518 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:49.518 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:49.518 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:49.518 Removing: /var/run/dpdk/spdk1/config 00:31:49.518 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:49.518 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:49.518 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:49.518 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:49.518 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:49.518 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:49.518 Removing: /var/run/dpdk/spdk2/config 00:31:49.518 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:49.518 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:49.518 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:49.518 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:49.518 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:49.518 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:49.518 Removing: /var/run/dpdk/spdk3/config 00:31:49.518 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:49.518 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:49.518 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:49.518 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:49.518 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:49.518 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:49.518 Removing: /var/run/dpdk/spdk4/config 00:31:49.518 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:49.518 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:49.518 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:49.518 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:49.518 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:49.518 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:49.518 Removing: /dev/shm/nvmf_trace.0 00:31:49.518 Removing: /dev/shm/spdk_tgt_trace.pid59933 00:31:49.518 Removing: /var/run/dpdk/spdk0 00:31:49.518 Removing: /var/run/dpdk/spdk1 00:31:49.518 Removing: /var/run/dpdk/spdk2 00:31:49.518 Removing: /var/run/dpdk/spdk3 00:31:49.518 Removing: /var/run/dpdk/spdk4 00:31:49.518 Removing: /var/run/dpdk/spdk_pid100422 00:31:49.518 Removing: /var/run/dpdk/spdk_pid59777 00:31:49.518 Removing: /var/run/dpdk/spdk_pid59933 00:31:49.518 Removing: /var/run/dpdk/spdk_pid60195 00:31:49.518 Removing: /var/run/dpdk/spdk_pid60293 00:31:49.518 Removing: /var/run/dpdk/spdk_pid60338 00:31:49.518 Removing: /var/run/dpdk/spdk_pid60453 00:31:49.518 Removing: /var/run/dpdk/spdk_pid60483 00:31:49.518 Removing: /var/run/dpdk/spdk_pid60607 00:31:49.518 Removing: /var/run/dpdk/spdk_pid60892 00:31:49.518 Removing: /var/run/dpdk/spdk_pid61075 00:31:49.518 Removing: /var/run/dpdk/spdk_pid61151 00:31:49.518 Removing: /var/run/dpdk/spdk_pid61249 00:31:49.518 Removing: /var/run/dpdk/spdk_pid61344 00:31:49.518 Removing: /var/run/dpdk/spdk_pid61377 00:31:49.518 Removing: /var/run/dpdk/spdk_pid61418 00:31:49.518 Removing: /var/run/dpdk/spdk_pid61479 00:31:49.518 Removing: /var/run/dpdk/spdk_pid61608 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62260 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62325 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62400 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62428 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62520 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62548 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62639 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62667 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62724 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62754 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62807 00:31:49.518 Removing: /var/run/dpdk/spdk_pid62837 00:31:49.778 Removing: /var/run/dpdk/spdk_pid62994 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63030 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63104 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63180 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63205 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63269 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63309 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63343 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63378 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63418 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63452 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63487 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63527 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63567 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63596 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63636 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63676 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63711 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63745 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63785 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63820 00:31:49.778 Removing: 
/var/run/dpdk/spdk_pid63863 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63900 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63938 00:31:49.778 Removing: /var/run/dpdk/spdk_pid63978 00:31:49.778 Removing: /var/run/dpdk/spdk_pid64019 00:31:49.778 Removing: /var/run/dpdk/spdk_pid64089 00:31:49.778 Removing: /var/run/dpdk/spdk_pid64200 00:31:49.778 Removing: /var/run/dpdk/spdk_pid64613 00:31:49.778 Removing: /var/run/dpdk/spdk_pid67989 00:31:49.778 Removing: /var/run/dpdk/spdk_pid68341 00:31:49.778 Removing: /var/run/dpdk/spdk_pid70810 00:31:49.778 Removing: /var/run/dpdk/spdk_pid71186 00:31:49.778 Removing: /var/run/dpdk/spdk_pid71456 00:31:49.778 Removing: /var/run/dpdk/spdk_pid71502 00:31:49.778 Removing: /var/run/dpdk/spdk_pid72397 00:31:49.778 Removing: /var/run/dpdk/spdk_pid72447 00:31:49.778 Removing: /var/run/dpdk/spdk_pid72815 00:31:49.778 Removing: /var/run/dpdk/spdk_pid73349 00:31:49.778 Removing: /var/run/dpdk/spdk_pid73796 00:31:49.778 Removing: /var/run/dpdk/spdk_pid74785 00:31:49.778 Removing: /var/run/dpdk/spdk_pid75776 00:31:49.778 Removing: /var/run/dpdk/spdk_pid75901 00:31:49.778 Removing: /var/run/dpdk/spdk_pid75970 00:31:49.778 Removing: /var/run/dpdk/spdk_pid77481 00:31:49.778 Removing: /var/run/dpdk/spdk_pid77705 00:31:49.778 Removing: /var/run/dpdk/spdk_pid82833 00:31:49.778 Removing: /var/run/dpdk/spdk_pid83276 00:31:49.778 Removing: /var/run/dpdk/spdk_pid83387 00:31:49.778 Removing: /var/run/dpdk/spdk_pid83524 00:31:49.778 Removing: /var/run/dpdk/spdk_pid83570 00:31:49.778 Removing: /var/run/dpdk/spdk_pid83621 00:31:49.778 Removing: /var/run/dpdk/spdk_pid83667 00:31:49.778 Removing: /var/run/dpdk/spdk_pid83836 00:31:49.778 Removing: /var/run/dpdk/spdk_pid83989 00:31:49.778 Removing: /var/run/dpdk/spdk_pid84265 00:31:49.778 Removing: /var/run/dpdk/spdk_pid84388 00:31:49.778 Removing: /var/run/dpdk/spdk_pid84644 00:31:49.778 Removing: /var/run/dpdk/spdk_pid84775 00:31:49.778 Removing: /var/run/dpdk/spdk_pid84911 00:31:49.778 Removing: /var/run/dpdk/spdk_pid85259 00:31:49.778 Removing: /var/run/dpdk/spdk_pid85674 00:31:49.778 Removing: /var/run/dpdk/spdk_pid85985 00:31:49.778 Removing: /var/run/dpdk/spdk_pid86486 00:31:49.778 Removing: /var/run/dpdk/spdk_pid86489 00:31:49.778 Removing: /var/run/dpdk/spdk_pid86835 00:31:49.778 Removing: /var/run/dpdk/spdk_pid86855 00:31:49.778 Removing: /var/run/dpdk/spdk_pid86870 00:31:49.778 Removing: /var/run/dpdk/spdk_pid86906 00:31:49.778 Removing: /var/run/dpdk/spdk_pid86917 00:31:49.778 Removing: /var/run/dpdk/spdk_pid87223 00:31:49.778 Removing: /var/run/dpdk/spdk_pid87272 00:31:50.037 Removing: /var/run/dpdk/spdk_pid87610 00:31:50.037 Removing: /var/run/dpdk/spdk_pid87862 00:31:50.037 Removing: /var/run/dpdk/spdk_pid88365 00:31:50.037 Removing: /var/run/dpdk/spdk_pid88948 00:31:50.037 Removing: /var/run/dpdk/spdk_pid90332 00:31:50.037 Removing: /var/run/dpdk/spdk_pid90928 00:31:50.037 Removing: /var/run/dpdk/spdk_pid90930 00:31:50.037 Removing: /var/run/dpdk/spdk_pid92875 00:31:50.037 Removing: /var/run/dpdk/spdk_pid92965 00:31:50.037 Removing: /var/run/dpdk/spdk_pid93061 00:31:50.037 Removing: /var/run/dpdk/spdk_pid93158 00:31:50.037 Removing: /var/run/dpdk/spdk_pid93321 00:31:50.037 Removing: /var/run/dpdk/spdk_pid93411 00:31:50.037 Removing: /var/run/dpdk/spdk_pid93503 00:31:50.037 Removing: /var/run/dpdk/spdk_pid93593 00:31:50.037 Removing: /var/run/dpdk/spdk_pid93943 00:31:50.037 Removing: /var/run/dpdk/spdk_pid94643 00:31:50.037 Removing: /var/run/dpdk/spdk_pid96007 00:31:50.037 Removing: /var/run/dpdk/spdk_pid96213 
00:31:50.037 Removing: /var/run/dpdk/spdk_pid96504 00:31:50.037 Removing: /var/run/dpdk/spdk_pid96803 00:31:50.037 Removing: /var/run/dpdk/spdk_pid97363 00:31:50.037 Removing: /var/run/dpdk/spdk_pid97369 00:31:50.037 Removing: /var/run/dpdk/spdk_pid97726 00:31:50.037 Removing: /var/run/dpdk/spdk_pid97885 00:31:50.037 Removing: /var/run/dpdk/spdk_pid98048 00:31:50.037 Removing: /var/run/dpdk/spdk_pid98145 00:31:50.037 Removing: /var/run/dpdk/spdk_pid98294 00:31:50.037 Removing: /var/run/dpdk/spdk_pid98408 00:31:50.037 Removing: /var/run/dpdk/spdk_pid99080 00:31:50.037 Removing: /var/run/dpdk/spdk_pid99115 00:31:50.037 Removing: /var/run/dpdk/spdk_pid99154 00:31:50.037 Removing: /var/run/dpdk/spdk_pid99409 00:31:50.037 Removing: /var/run/dpdk/spdk_pid99440 00:31:50.037 Removing: /var/run/dpdk/spdk_pid99474 00:31:50.037 Removing: /var/run/dpdk/spdk_pid99903 00:31:50.037 Removing: /var/run/dpdk/spdk_pid99934 00:31:50.037 Clean 00:31:50.037 10:12:27 -- common/autotest_common.sh@1448 -- # return 0 00:31:50.037 10:12:27 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:31:50.037 10:12:27 -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:50.037 10:12:27 -- common/autotest_common.sh@10 -- # set +x 00:31:50.335 10:12:27 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:31:50.335 10:12:27 -- common/autotest_common.sh@727 -- # xtrace_disable 00:31:50.335 10:12:27 -- common/autotest_common.sh@10 -- # set +x 00:31:50.335 10:12:27 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:50.335 10:12:27 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:50.335 10:12:27 -- spdk/autotest.sh@385 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:50.335 10:12:27 -- spdk/autotest.sh@387 -- # hash lcov 00:31:50.335 10:12:27 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:50.335 10:12:27 -- spdk/autotest.sh@389 -- # hostname 00:31:50.335 10:12:27 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1701806725-069-updated-1701632595 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:50.335 geninfo: WARNING: invalid characters removed from testname! 
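The coverage capture above is followed by the merge-and-filter passes recorded below. The same pipeline, reduced to its shape under a few assumptions: the genhtml/geninfo --rc flags present on the real commands are dropped for brevity, the output directory is abbreviated to $out, and the per-pattern loop replaces the individual lcov -r invocations logged below:

    lcov_rc="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    out=/home/vagrant/spdk_repo/spdk/../output

    # Capture coverage for this run, tagged with the host name.
    lcov $lcov_rc --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o $out/cov_test.info

    # Merge the pre-test baseline with the test capture ...
    lcov $lcov_rc --no-external -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info

    # ... then strip third-party and uninteresting paths, one pattern per pass.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $lcov_rc --no-external -q -r $out/cov_total.info "$pat" -o $out/cov_total.info
    done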
00:32:16.893 10:12:53 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:20.192 10:12:57 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:23.515 10:13:00 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:26.051 10:13:03 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:29.345 10:13:06 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:31.893 10:13:09 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:34.462 10:13:11 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:34.462 10:13:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:34.462 10:13:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:34.462 10:13:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:34.462 10:13:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:34.462 10:13:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:34.462 10:13:11 -- paths/export.sh@3 -- $
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:34.462 10:13:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:34.462 10:13:11 -- paths/export.sh@5 -- $ export PATH
00:32:34.462 10:13:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:34.462 10:13:11 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:32:34.462 10:13:11 -- common/autobuild_common.sh@437 -- $ date +%s
00:32:34.462 10:13:11 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715767991.XXXXXX
00:32:34.462 10:13:11 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715767991.9aTSLe
00:32:34.462 10:13:11 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:32:34.462 10:13:11 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:32:34.462 10:13:11 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:32:34.462 10:13:11 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:32:34.462 10:13:11 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:32:34.462 10:13:11 -- common/autobuild_common.sh@453 -- $ get_config_params
00:32:34.462 10:13:11 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:32:34.462 10:13:11 -- common/autotest_common.sh@10 -- $ set +x
00:32:34.463 10:13:11 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
00:32:34.463 10:13:11 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:32:34.463 10:13:11 -- pm/common@17 -- $ local monitor
00:32:34.463 10:13:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:34.463 10:13:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:34.463 10:13:11 -- pm/common@25 -- $ sleep 1
00:32:34.463 10:13:11 -- pm/common@21 -- $ date +%s
00:32:34.463 10:13:11 -- pm/common@21 -- $ date +%s
00:32:34.463 10:13:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715767991
00:32:34.463 10:13:11 -- pm/common@21 -- $
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715767991
00:32:34.721 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715767991_collect-vmstat.pm.log
00:32:34.721 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715767991_collect-cpu-load.pm.log
00:32:35.656 10:13:12 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:32:35.656 10:13:12 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:32:35.656 10:13:12 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:32:35.656 10:13:12 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:32:35.656 10:13:12 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:32:35.656 10:13:12 -- spdk/autopackage.sh@19 -- $ timing_finish
00:32:35.656 10:13:12 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:35.656 10:13:12 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:32:35.656 10:13:12 -- common/autotest_common.sh@736 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:35.656 10:13:12 -- spdk/autopackage.sh@20 -- $ exit 0
00:32:35.656 10:13:12 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:35.656 10:13:12 -- pm/common@29 -- $ signal_monitor_resources TERM
00:32:35.656 10:13:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:32:35.656 10:13:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:35.656 10:13:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:32:35.656 10:13:12 -- pm/common@44 -- $ pid=101939
00:32:35.656 10:13:12 -- pm/common@50 -- $ kill -TERM 101939
00:32:35.656 10:13:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:35.656 10:13:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:32:35.656 10:13:12 -- pm/common@44 -- $ pid=101940
00:32:35.656 10:13:12 -- pm/common@50 -- $ kill -TERM 101940
00:32:35.656 + [[ -n 5030 ]]
00:32:35.656 + sudo kill 5030
00:32:35.669 [Pipeline] }
00:32:35.690 [Pipeline] // timeout
00:32:35.697 [Pipeline] }
00:32:35.714 [Pipeline] // stage
00:32:35.720 [Pipeline] }
00:32:35.740 [Pipeline] // catchError
00:32:35.749 [Pipeline] stage
00:32:35.751 [Pipeline] { (Stop VM)
00:32:35.766 [Pipeline] sh
00:32:36.044 + vagrant halt
00:32:40.232 ==> default: Halting domain...
00:32:46.802 [Pipeline] sh
00:32:47.081 + vagrant destroy -f
00:32:51.264 ==> default: Removing domain...
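Note on the teardown above: before the VM is halted, autopackage stops the resource monitors that start_monitor_resources launched earlier (collect-cpu-load and collect-vmstat). pm/common looks for a <monitor>.pid file under the power output directory and sends the recorded PID a TERM; the leftover session (PID 5030 here) is then killed and Vagrant halts and destroys the worker domain. A rough sketch of that pidfile start/stop pattern, with assumed details rather than the real scripts/perf/pm code:

    # Sketch only: background a monitor, record its PID, TERM it at teardown.
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power
    mkdir -p "$power_dir"
    /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load \
        -d "$power_dir" -l -p "monitor.autopackage.sh.$(date +%s)" &
    echo $! > "$power_dir/collect-cpu-load.pid"   # assumed: pid recorded by hand here
    # ... later, mirroring stop_monitor_resources:
    if [[ -e "$power_dir/collect-cpu-load.pid" ]]; then
        kill -TERM "$(cat "$power_dir/collect-cpu-load.pid")" || true
        rm -f "$power_dir/collect-cpu-load.pid"
    fi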
00:32:51.276 [Pipeline] sh
00:32:51.579 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output
00:32:51.588 [Pipeline] }
00:32:51.606 [Pipeline] // stage
00:32:51.611 [Pipeline] }
00:32:51.627 [Pipeline] // dir
00:32:51.631 [Pipeline] }
00:32:51.649 [Pipeline] // wrap
00:32:51.657 [Pipeline] }
00:32:51.673 [Pipeline] // catchError
00:32:51.683 [Pipeline] stage
00:32:51.685 [Pipeline] { (Epilogue)
00:32:51.700 [Pipeline] sh
00:32:51.983 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:58.587 [Pipeline] catchError
00:32:58.589 [Pipeline] {
00:32:58.604 [Pipeline] sh
00:32:58.944 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:59.202 Artifacts sizes are good
00:32:59.212 [Pipeline] }
00:32:59.229 [Pipeline] // catchError
00:32:59.241 [Pipeline] archiveArtifacts
00:32:59.247 Archiving artifacts
00:32:59.438 [Pipeline] cleanWs
00:32:59.450 [WS-CLEANUP] Deleting project workspace...
00:32:59.450 [WS-CLEANUP] Deferred wipeout is used...
00:32:59.456 [WS-CLEANUP] done
00:32:59.458 [Pipeline] }
00:32:59.477 [Pipeline] // stage
00:32:59.484 [Pipeline] }
00:32:59.501 [Pipeline] // node
00:32:59.506 [Pipeline] End of Pipeline
00:32:59.543 Finished: SUCCESS
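Closing note on the Epilogue: after the output directory is moved back into the Jenkins workspace, compress_artifacts.sh packs the results and check_artifacts_size.sh gates them before archiveArtifacts runs; the log records only its verdict, "Artifacts sizes are good". Purely as an illustration of such a gate, with the limit, path and logic below being assumptions rather than the contents of check_artifacts_size.sh:

    # Hypothetical size gate; not the actual jbp script.
    max_kb=$((5 * 1024 * 1024))                  # assumed 5 GiB ceiling
    used_kb=$(du -sk output | awk '{print $1}')  # artifacts staged under ./output
    if (( used_kb > max_kb )); then
        echo "Artifacts are too large: ${used_kb} KB" >&2
        exit 1
    fi
    echo "Artifacts sizes are good"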